I'm using an STM32F401RE Nucleo board to measure the ambient temperature. After the sampling process, I receive a digital value from ADC_CHANNEL_TEMPERATURE, and I want to convert this digital value into °C. I searched the internet for this and found two different methods:
Method 1: Page 226 in http://www.st.com/content/ccc/resource/technical/document
Temp(°C) = (V_sense - V_25)/Avg_slope + 25
Method 2: Page 251 in http://www.st.com/content/ccc/resource/technical/document
Temp(°C) = ( ( (110 - 30)*(TS_DATA - TS_CAL1) ) / (TS_CAL2 - TS_CAL1) ) + 30
Where:
- TS_CAL2: temperature sensor calibration value at 110 °C
- TS_CAL1: temperature sensor calibration value at 30 °C
- TS_DATA: temperature sensor output from ADC
I am confused about which one is the correct formula for calculating the temperature in °C.
Although Method 1 is from the reference manual of the STM32F401, the temperature result doesn't look correct, while Method 2, from the reference manual of the STM32F0 series, looks more reasonable.
So I still don't know which formula I should apply when using the STM32F401RE Nucleo board.
Method 1, Temp(°C) = (V_sense - V_25)/Avg_slope + 25, is a simplified version where calibration is presumably done by pre-measuring the value at 25 °C and assigning it to V_25. In this context, Avg_slope is probably taken from the datasheet, but it could also be the result of some calibration.
Method 2, Temp(°C) = ((110 - 30)*(TS_DATA - TS_CAL1))/(TS_CAL2 - TS_CAL1) + 30, uses TWO calibration points, at 30 °C and 110 °C, and is more correct. Note that Method 1 can also use two calibration points (used to calculate the average slope). Also, Method 2 would let you take your calibration points anywhere (presumably, in the range you are most interested in).
Both methods, however, suffer from any non-linearity of the sensor. I suppose that some non-linearity is present, because Method 1 speaks of an "average slope".
If you want greater precision, you can take several calibration points and interpolate between them.
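For illustration, here is a minimal sketch of Method 2 in Python (the calibration values below are made up; on a real part you would read TS_CAL1 and TS_CAL2 from the system-memory addresses given in the device datasheet, and rescale the raw reading first if your VDDA differs from the voltage at which the factory calibration was done):

TS_CAL1 = 943    # hypothetical raw ADC reading at 30 degrees C
TS_CAL2 = 1280   # hypothetical raw ADC reading at 110 degrees C

def temperature_c(ts_data):
    # two-point linear interpolation between the 30 and 110 degree points
    return (110 - 30) * (ts_data - TS_CAL1) / (TS_CAL2 - TS_CAL1) + 30

print(temperature_c(1020))   # about 48 degrees C for these made-up values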
I am currently using the STM32F030C8T6 microcontroller.
Question: when the temperature sensor channel is activated, is TS_DATA = (ADC value)*(Vdd/Vref), or is TS_DATA just the raw ADC value?
Currently I am doing some experiments to try to determine the thermal conductivity of my fluid, which is ethanol.
To do so, I need to use the principle of the TPS (transient plane source) method, which corresponds to the kind of sensor I have.
I would like to plot, in Python, ∆D(τ) as a function of τ, and also ∆T as a function of D(τ).
Basically, I have this formula for D(τ) (pieced together from the paper and my script below):

D(τ) = [n(n+1)]^-2 * ∫_0^τ σ^-2 [ Σ_{l=1..n} Σ_{k=1..n} l*k * exp(-(l²+k²)/(4n²σ²)) * I0(l*k/(2n²σ²)) ] dσ

∆T(τ) = P0/(π^(3/2)*a*K) * D(τ)

where I0 is a modified Bessel function, n is the number of concentric rings of the sensor, a is the sensor radius, K is the thermal conductivity, κ is the thermal diffusivity, and τ = √(κt)/a.
The paper that I am reading contains the following information that might help.
"From Eq X (D thau), we can see that the average temperature
increase in the hot disk sensor is proportional to a function
D(τ), which is a rather complicated function of a dimen-
sionless parameter τ = √κt/a, but, numerically, it can be
accurately evaluated to five or six significant figures.
When using the hot disk technique to determine thermal
transport properties, a constant electric current is supplied
to the sensor at time t = 0, then the temperature change of
the sensor is recorded as a function of time. The average
temperature increase across the hot disk sensor area can be
measured by monitoring the total resistance of the hot disk
sensor:
R = R0[1 + α ̄deltaT (t)], (28)
where R is the total electrical resistance at time t, R0 is the
initial resistance at t = 0, α is the temperature coefficient of
resistivity, which is well known for nickel. Eq. (28) allows us
to accurately determine ∆T as a function of time.
If one knows the relationship between t and τ, one can
plot ̄∆T as a function of D(τ), and a straight line should
be obtained. The slope of that line is P0/(π3/2aK), from
which thermal conductivity K can be calculated. However,
the proper value of τ is generally unknown, since τ = √κt/a
and the thermal diffusivity κ is unknown. To calculate the
thermal conductivity correctly, one normally makes a series
of computational plots of ∆T versus D(τ) for a range of κ
values. The correct value of κ will yield a straight line for the
∆T versus D(τ) plot. This optimization process can be done
by the software until an optimized value of κ is found. In
practice, we can measure the density and the specific heat of
the material separately, so that between K and κ, there is only
one independent parameter. Therefore, both thermal conduc-
tivity and thermal diffusivity of the sample can be obtained
from above procedure based on the transient measurement
using a hot disk sensor"
So if I understood correctly, I need to plot ∆T versus D(τ), and only the correct value of the thermal diffusivity should give a straight line. However, when I try to do so, I always obtain a straight line, whatever value I use. The part I am not sure about is the value of the modified Bessel function. Please find my script below.
import numpy as np
from scipy.special import i0e     # i0e(x) = exp(-x)*I0(x), overflow-safe modified Bessel function
from scipy.integrate import quad
import matplotlib.pyplot as plt

P0 = 0.1         # heating power (W)
a = 0.000958     # radius of the biggest ring (m)
lam = 0.169      # thermal conductivity of ethanol (W/(m*K)) (I'm not sure if this is ok)
n = 7            # number of concentric rings of the sensor

tini = 0.015
tfin = 15.0
time = np.linspace(tini, tfin, num=1000)

def integrand(sigma):
    # The double ring sum must sit inside the integral: both the exponential
    # and the Bessel term depend on the integration variable sigma.
    s2n2 = sigma**2 * n**2
    som = 0.0
    for l in range(1, n + 1):
        for k in range(1, n + 1):
            x = l * k / (2.0 * s2n2)
            # exp(-(l^2+k^2)/(4*n^2*sigma^2)) * I0(x), folded together via i0e
            # to avoid overflow: the combined exponent is -(l-k)^2/(4*n^2*sigma^2)
            som += l * k * np.exp(-(l - k)**2 / (4.0 * s2n2)) * i0e(x)
    return som / sigma**2

def D(tau):
    # Small positive lower limit: the idealized line-ring model appears to be
    # singular at sigma = 0, so a cutoff keeps quad stable.
    val, _ = quad(integrand, 1e-4, tau)
    return val / (n * (n + 1))**2

# Scan trial diffusivities kappa (coarser than the original 1000 values,
# since each D(tau) involves a 49-term double sum inside quad)
for kappa in np.linspace(0.00000001, 0.3, 5):
    theta = a**2 / kappa                              # characteristic time
    Dlist = [D(np.sqrt(t / theta)) for t in time]     # tau = sqrt(t/theta) = sqrt(kappa*t)/a
    # NOTE: T computed this way is P0/(pi^(3/2)*a*lam) * D by construction, so
    # T versus D is a straight line for ANY kappa; in the real procedure the
    # temperature rise must come from the measured resistance, Eq. (28).
    Tlist = [P0 / (np.pi**1.5 * a * lam) * d for d in Dlist]
    plt.figure(1)
    plt.plot(Dlist, Tlist)
plt.show()
I am trying to do the calculation from 0.015 seconds until 15 seconds, with 1000 points in total: 0.015, 0.030, 0.045, and so on.
And for my κ I am going from 0.00000001 until 0.3, with 1000 points in total.
The paper that I am looking at is called "Rapid thermal conductivity measurement with a hot disk sensor. Part 1. Theoretical considerations".
I hope you could help with this one.
Thank you
I am trying to enhance an ultrasonic signal by spectral subtraction. The signal is in the time domain and contains noise. I have divided the signal into Hamming windows of 2 µs and calculated the Fourier transforms of those frames. Then I selected 3 consecutive frames which I interpreted as noise. I averaged the magnitude spectra of those 3 frames and subtracted that average from every single frame's magnitude spectrum. Then I defined all negative magnitude spectra as zero and reconstructed the enhanced Fourier transform by combining the new magnitude spectra with the phase spectra. This gives me a series of complex numbers per frame. Now I would like to transform this series back to the time domain by using the inverse Fourier transform. However, this operation provides me with complex numbers which I do not know how to use.
I have read in a couple of posts that it is normal to obtain complex output from an inverse Fourier transform. However, opinions on what to do with the complex numbers are divided. Some say to neglect the imaginary part because it is supposed to be very small (around 1e-15), but in my case it is not negligible (0.01-0.5). To be honest, I just do not know what to do with these numbers, because I expected the inverse Fourier transform to give me real numbers only, or at least very small imaginary parts. Unfortunately, that is not the case.
# General parameters
#
total_samples = length(time_or) # Total numbers of samples in the current series
max_time = max(time_or) # Length of the measurement in microseconds
sampling_freq = 1/(max_time/1000000)*total_samples # Sampling frequency
frame_length_t = 2 # In microseconds (time)
frame_length_s = round(frame_length_t/1000000*sampling_freq) # In samples per frame
overlap = frame_length_s/2 # Overlap in number of frames, set to 50% overlap
#
# Transform the frame to frequency domain
#
fft_frames = specgram(amp, n=frame_length_s, Fs=125, window=hamming(frame_length_s), overlap=overlap)
mag_spec=abs(fft_frames[["S"]])
phase_spec=atan(Im(fft_frames[["S"]])/Re(fft_frames[["S"]]))
#
# Determine the arrival time of noise
#
cutoff= 10 #determine the percentage of the signal that has to be cut off
dnr=us_data[(length(us_data[,1])*(cutoff/100)):length(us_data[,1]), ]
noise_arr=(length(us_data[,1])-length(dnr[,1])+min(which(dnr[,2]>0.01)))*0.008
#
# Select the frames for noise spectrum estimation
#
noise_spec=0
noise_spec=mag_spec[,noise_arr]
noise_spec=noise_spec+mag_spec[, (noise_arr+1)]
noise_spec=noise_spec+mag_spec[, (noise_arr+2)]
noise_spec_check=noise_spec/3
#
# Subtract the estimated noise spectrum from every frame
#
est_mag_spec=mag_spec-noise_spec_check
est_mag_spec[est_mag_spec < 0] = 0
#
# Transform back to frequency spectrum
#
j=complex(real=0, imaginary=1)
enh_spec = est_mag_spec*exp(j*phase_spec)
#
# Transform back to time domain
#
install.packages("pracma")
library("pracma")
enh_time=fft(enh_spec[,2], inverse=TRUE)
I hope that there is anyone with an idea of how to process these complex numbers. Maybe I have made a mistake earlier on in the processing method, but I have checked it multiple times and it seems quite solid to me. It is the next-to-last step of the process, and I am really hoping to obtain a nice time-domain signal after the inverse Fourier transform.
An essential troubleshooting aid when transforming data with the Fourier transform is the notion that you can do an FFT, take that data, do an inverse FFT, and get back your original data. I suggest you get comfortable doing this with toy time-domain input data. Let's say a 1 kHz audio wave is your time-domain data: send it into an FFT call, which will return an array of its frequency-domain representation. Without doing anything to that data, send it into an inverse FFT (ifft); the data returned will be your original 1 kHz audio wave. Do that now to gain an appreciation of its power, and use this trick on your project to confirm you are in the ballpark. Alternatively, if you begin with frequency-domain data, you can do the same in reverse:
freq domain data -> ifft -> time domain data -> fft -> same freq domain data
or
time domain -> fft -> freq domain -> ifft -> same time domain data
See more details here: Get frequency with highest amplitude from FFT
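A minimal sketch of that round trip in Python (NumPy assumed; the tone frequency and sample rate are arbitrary):

import numpy as np

fs = 48000                             # sample rate (Hz), arbitrary for the demo
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 1000 * t)       # 1 kHz tone: the toy time-domain data

X = np.fft.fft(x)                      # time domain -> frequency domain
x_back = np.fft.ifft(X)                # frequency domain -> back to time domain

print(np.max(np.abs(x_back.imag)))     # on the order of machine epsilon: numerical noise only
print(np.allclose(x, x_back.real))     # True: the round trip recovers the signal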
This is your problem:
phase_spec=atan(Im(fft_frames[["S"]])/Re(fft_frames[["S"]]))
Here you compute an angle in a half circle, mapping the other half onto the first. That is, you are losing information.
Many languages have a function to obtain the phase of a complex value, for example in MATLAB it is angle, and in Python numpy.angle.
Alternatively, use the atan2 function, which exists in every single language I've ever used, except that in NumPy they decided to call it arctan2. It computes the four-quadrant arctangent by taking the two components as separate values. That is, atan(y/x) agrees with atan2(y,x) only when x is positive (the right half-plane).
I presume you can do
phase_spec=atan2(Im(fft_frames[["S"]]), Re(fft_frames[["S"]]))
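A quick numeric illustration of the difference (shown with NumPy; the point carries over to R's atan2):

import numpy as np

z = -1 - 1j                         # a third-quadrant value
print(np.arctan(z.imag / z.real))   # 0.785..., folded into the wrong half-plane
print(np.arctan2(z.imag, z.real))   # -2.356..., the true four-quadrant phase
print(np.angle(z))                  # same as arctan2 here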
I am testing a temperature sensor for a project. I found that there is a difference between the expected and measured values. As the difference is non-linear over the temperature range, I can't simply add a constant offset. Is there a way I can apply a kind of offset to the acquired data?
UPDATE
I have a commercial heater element which heats up to a set temperature (I named this temperature "expected"). On the other side I have a temperature sensor (my project) which measures the temperature of the heater (here I named it "measured").
I noticed a difference between the measured and expected values which I would like to compensate, so that the measured value will be close to the expected value.
Example
If my sensor measures 73.3, it should be processed by some means (mathematically or otherwise) so that it shows something close to 70.28.
Hope this clears things up a little.
Measured Expected
30.5 30.15
41.4 40.29
52.2 50.31
62.8 60.79
73.3 70.28
83 79.7
94 90.39
104.3 99.97
114.8 109.81
Thank you for your time.
You are interested in describing the deviation of one variable from the other. What you are looking for is the function

g(x) = f(x) - x

which returns an approximation, a prediction, of what number to add to x to get y, based on the real x input. First you need the prediction of y based on observed x values, the f(x). This is what you can get from doing a regression:
x = MeasuredExpected (what you have estimated; I assume you will know this value)
y = MeasuredReal (what has actually been observed instead of x)

f(x) = MeasuredReal(estimated) = alfa*x + beta + e
In the simplest case of just one variable you don't even have to include special tools for this. The coefficients of the equation are equal to:
alfa = covariance(MeasuredExpected, MeasuredReal) / variance(MeasuredExpected)
beta = average(MeasuredReal) - alfa * average(MeasuredExpected)

so for each expected measurement x you can now state that the most probable real measured value is:

f(x) = MeasuredReal(expected) = alfa*x + beta
(under the assumption that the error is normally distributed, iid)
So you have to add

g(x) = f(x) - x = (alfa - 1)*x + beta

to account for the difference that you have observed between your usual Expected and Measured values.
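A minimal sketch of those formulas in Python, using the questioner's table (NumPy assumed; x is the Expected column and y the Measured column, following the notation above):

import numpy as np

x = np.array([30.15, 40.29, 50.31, 60.79, 70.28, 79.7, 90.39, 99.97, 109.81])  # Expected
y = np.array([30.5, 41.4, 52.2, 62.8, 73.3, 83.0, 94.0, 104.3, 114.8])         # Measured

alfa = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # covariance / variance
beta = y.mean() - alfa * x.mean()

def f(v):
    return alfa * v + beta    # most probable measured value for expected v

def g(v):
    return f(v) - v           # the correction term, (alfa - 1)*v + beta

print(f(70.28))   # roughly 73, close to the observed 73.3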
Maybe you could use a data sample to do a regression analysis on the variation, and use the regression function as an offset function.
http://en.wikipedia.org/wiki/Regression_analysis
You can create a calibration lookup table (LUT).
The error in the sensor reading is not linear over the entire range of the sensor, but you can divide the range up into a number of sub-ranges for which the error within the sub-range is nearly linear. Then you calibrate the sensor by taking a reading in each sub-range and calculating the offset error for each sub-range. Store the offset for each sub-range in an array to create a calibration lookup table.
Once the calibration table is known, you can correct a measurement by performing a table lookup for the proper offset. Use the actual measured value to determine the index into the array from which to get the proper offset.
The sub-ranges don't need to be the same size, although same-sized ranges make it easy to calculate the proper table index for any measurement. (If the sub-ranges are not the same size, you could use a two-dimensional array (matrix) and store not only the offset but also the start point of each sub-range. Then you would scan through the start points to determine the proper table index for any measurement.)
You can make the correction more accurate by dividing into smaller sub-ranges and creating a larger calibration lookup table. Or you may be able to interpolate between two table entries to get a more accurate offset.
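A minimal sketch of such a lookup-table correction in Python (the sub-range boundaries and offsets below are rough values read off the questioner's table, purely for illustration):

import numpy as np

range_starts = np.array([30.0, 45.0, 60.0, 75.0, 90.0, 105.0])  # start of each sub-range
offsets      = np.array([-0.7, -1.9, -2.5, -3.3, -4.0, -5.0])   # correction per sub-range

def correct(measured):
    # find which sub-range the raw measurement falls into
    idx = np.searchsorted(range_starts, measured, side="right") - 1
    idx = np.clip(idx, 0, len(offsets) - 1)
    return measured + offsets[idx]

print(correct(73.3))   # 60-75 sub-range: 73.3 - 2.5 = 70.8, within half a degree of the expected 70.28

Smaller sub-ranges, or interpolating with something like np.interp(measured, range_starts, offsets), would tighten this up further.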
I have a stream of data that trends over time. How do I determine the rate of change using C#?
It's been a long time since calculus class, but now is the first time I actually need it (in 15 years). Now when I search for the term 'derivatives' I get financial stuff, and other math things I don't think I really need.
Mind pointing me in the right direction?
If you want something more sophisticated that smooths the data, you should look into a digital filter algorithm. It's not hard to implement if you can cut through the engineering jargon. The classic method is Savitzky-Golay.
If you have the last n samples stored in an array y and each sample is equally spaced in time, then you can calculate the derivative using something like this:
coefficients = (1, -8, 0, 8, -1)
N = 5   # points
h = 1   # sample spacing, in seconds

deriv = 0
for i in range(N):
    deriv += y[i] * coefficients[i]
deriv /= (12 * h)
This example happens to be an N = 5 filter of the "3/4 (cubic/quartic)" kind. The bigger N is, the more points it averages and the smoother the result, but the latency is also higher: you have to wait N/2 points to get the derivative at time "now".
For more coefficients, look here at the Appendix
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
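If you would rather not hand-roll the coefficients, SciPy ships the same filter (shown in Python; the question asks about C#, but the idea carries over directly):

import numpy as np
from scipy.signal import savgol_filter

t = np.arange(0, 10, 0.1)                        # samples every h = 0.1 s
y = np.sin(t) + 0.05 * np.random.randn(t.size)   # a noisy trending stream

# 5-point cubic Savitzky-Golay first derivative; delta is the sample spacing h
dydt = savgol_filter(y, window_length=5, polyorder=3, deriv=1, delta=0.1)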
You need both the data value V and the corresponding time T, at least for the latest data point and the one before that. The rate of change can then be approximated with Euler's backward formula, which translates into
dvdt = (V_now - V_a_moment_ago) / (T_now - T_a_moment_ago);
in C#.
Rate of change is calculated as follows:
- Calculate a delta, such as "price minus price 20 days ago"
- Calculate the rate of change, such as "delta / price 99 days ago"
Total rate of change, i.e. (new_value - original_value)/time?
My question has to do with the physical meaning of the results of doing a spectral analysis of a signal, or of throwing the signal into an FFT and interpreting what comes out using a suitable numerical package.
Specifically:
take a signal, say a time-varying voltage v(t)
throw it into an FFT (you get back a sequence of complex numbers)
now take the modulus (abs) and square the result, i.e. |fft(v)|^2.
So you now have real numbers on the y axis -- shall I call these spectral coefficients?
using the sampling resolution, you follow a cookbook recipe and associate the spectral coefficients to frequencies.
AT THIS POINT, you have a frequency spectrum g(w) with frequency on the x axis, but WHAT PHYSICAL UNITS on the y axis?
My understanding is that this frequency spectrum shows how much of the various frequencies are present in the voltage signal -- they are spectral coefficients in the sense that they are the coefficients of the sines and cosines of the various frequencies required to reconstitute the original signal.
So the first question is, what are the UNITS of these spectral coefficients?
The reason this matters is that spectral coefficients can be tiny and enormous, so I want to use a dB scale to represent them.
But to do that, I have to make a choice:
Either I use the 20log10 dB conversion, corresponding to a field measurement, like voltage.
Or I use the 10log10 dB conversion, corresponding to an energy measurement, like power.
Which scaling I use depends on what the units are.
Any light shed on this would be greatly appreciated!
take a signal, a time-varying voltage v(t)
units are V, values are real.
throw it into an FFT -- ok, you get back a sequence of complex numbers
units are still V, values are complex (not V/Hz; the FFT of a DC signal becomes a point at the DC level, not a Dirac delta function zooming off to infinity)
now take the modulus (abs)
units are still V, values are real - magnitude of signal components
and square the result, i.e. |fft(v)|^2
units are now V^2, values are real - squares of the magnitudes of the signal components
shall I call these spectral coefficients?
It's closer to a power density than to the usual use of "spectral coefficient". If your sink is a perfect resistor it will be a power, but if your sink is frequency dependent it's just "the square of the magnitude of the FFT of the input voltage".
AT THIS POINT, you have a frequency spectrum g(w): frequency on the x axis, and... WHAT PHYSICAL UNITS on the y axis?
Units are V^2.
The other reason the units matter is that the spectral coefficients can be tiny and enormous, so I want to use a dB scale to represent them. But to do that, I have to make a choice: do I use the 20log10 dB conversion (corresponding to a field measurement, like voltage)? Or do I use the 10log10 dB conversion (corresponding to an energy measurement, like power)?
You've already squared the voltage values, giving the equivalent power into a perfect 1 Ohm resistor, so use 10log10.
log(x^2) is 2*log(x), so 20log10(|fft(v)|) = 10log10(|fft(v)|^2); alternatively, if you did not square the values, you could use 20log10.
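A quick numeric check that the two scalings agree (NumPy assumed; the sample values are arbitrary):

import numpy as np

v = np.abs(np.fft.fft([0.5, 1.0, -0.25, 0.75]))   # |fft(v)| of a toy signal
db_from_magnitude = 20 * np.log10(v)
db_from_power     = 10 * np.log10(v**2)
print(np.allclose(db_from_magnitude, db_from_power))   # True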
The y axis is complex (as opposed to real). The magnitude is the amplitude of the original signal in whatever units your original samples were in. The angle is the phase of that frequency component.
Here's what I've been able to come up with so far:
The y-axis seems likely to be in units of [Energy / Hz] !?
Here's how I'm deriving this (feedback welcomed!):
the signal v(t) is in volts
so after taking the Fourier integral, ∫ e^(iwt) v(t) dt, we should have units of [volts*seconds], or [volts/Hz] (e^(iwt) is unitless)
taking the magnitude squared should then give units of [volts^2 * s^2], or [volts^2 * s/Hz]
we know Power is proportional to volts ^2, so this gets us to [power * s / Hz]
but Power is the time-rate of change in energy, i.e. power = energy/s, so we can also write Energy = power * s
this leaves us with the candidate conclusion [Energy/Hz]. (Joules/Hz ?!)
... which suggests the meaning "Energy content per Hz", and suggests as a use integrating frequency bands and seeing the energy content... which would be very nice if it were true...
Continuing... assuming the above is correct, then we are dealing with an Energy measurement, so this would suggest using 10log10 conversion to get into dB scale, instead of 20log10...
The power into a resistor is v^2/R watts. The power of a signal x(t) is an abstraction of the power into a 1 Ohm resistor. Therefore, the power of a signal x(t) is x^2 (also called instantaneous power), regardless of the physical units of x(t).
For example, if x(t) is temperature, and the units of x(t) are degrees C, then the units for the power x^2 of x(t) are C^2, certainly not watts.
If you take the Fourier transform of x(t) to get X(jw), then the units of X(jw) are C*sec, or C/Hz (according to the Fourier transform integral). If you use (abs(X(jw)))^2, then the units are C^2*sec^2 = C^2*sec/Hz. Since power units are C^2, and energy units are C^2*sec, (abs(X(jw)))^2 gives the energy spectral density, say E/Hz. This is consistent with Parseval's theorem, where the energy of x(t) is given by (1/(2*pi)) times the integral of (abs(X(jw)))^2 with respect to w: (1/(2*pi)) * int((abs(X(jw)))^2 dw) -> (1/(2*pi)) * (C^2*sec/Hz) * (2*pi*Hz) -> C^2*sec -> E.
Conversion to a dB (log scale) scale does not change the units.
If you take the FFT of samples of x(t), written as x(n), to get X(k), then the result X(k) is an estimate of the Fourier series coefficients of a periodic function, where one period over T0 seconds is the segment of x(t) that was sampled. If the units of x(t) are degrees C, then the units of X(k) are also degrees C. The units of abs(X(k))^2 are C^2, which are the units of power. Thus, a plot of abs(X(k))^2 versus frequency shows the power spectrum (not the power spectral density) of x(n), which is an estimate of the power of a set of frequency components of x(t) at the frequencies k/T0 Hz.
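The discrete analogue of Parseval's theorem is easy to check numerically (NumPy assumed):

import numpy as np

x = np.random.randn(1024)
X = np.fft.fft(x)
# sum of x^2 (energy in the time domain) equals (1/N) * sum |X|^2
print(np.allclose(np.sum(x**2), np.sum(np.abs(X)**2) / x.size))   # True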
Well, late answer I know. But I just had cause to do something like this, in a different context. My raw data was latency values for transactions against a storage unit - I resampled it to a 1ms time interval. So original data y was "latency, in microseconds." I had 2^18 = 262144 original data points, on 1ms time steps.
After I did the FFT, I got a 0th component (DC) such that the following held:
FFT[0] = 262144*(average of all input data).
So it looks to me like FFT[0] is N*(average of input data). That sort of makes sense - every single data point possesses that DC average as part of what it is, so you add 'em all up.
If you look at the definition of the FFT that makes sense too. All of the other components would involve sine and cosine terms too, but really the FFT is just a summation. The average is just the only one that happens to be present in all points equally, because you have cos(0) = 1.
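That observation is easy to verify (NumPy assumed):

import numpy as np

x = np.random.rand(262144)                        # toy stand-in for the latency samples
X = np.fft.fft(x)
print(np.isclose(X[0].real, x.size * x.mean()))   # True: FFT[0] = N * mean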