Extract Fourier coefficients from fft() in R

I need to derive the Fourier time series coefficients associated with the (i-1)-th harmonic from the fft() function in R. Any ideas?
For instance
Adding these concepts we get the general form of the Fourier series:
f(t) = a_0 + Σ_k a_k·sin(kωt + ρ_k)
where a_0 is the DC component and ω = 2πf_0, with f_0 the fundamental frequency of the original wave.
Each wave component a_k·sin(kωt + ρ_k) is also called a harmonic.
If I fix the number of harmonics to 2, I would like to derive a_0, a_1, a_2 from fft().

It's a very general question. You can look here: Fourier Transform: A R Tutorial, for a start.
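To make this concrete, here is a minimal sketch (my own illustrative example, not from that tutorial) of pulling a_0 and the amplitude and phase of the first two harmonics out of fft(), using the sine convention from the question:
# Build a signal with a known DC term and two sine harmonics, then
# recover a_0, a_1, a_2 (and the phases rho_k) from its FFT.
n <- 64
t <- (0:(n - 1)) / n                 # one full period, n samples
y <- 1.5 + 2.0*sin(2*pi*1*t + 0.3) + 0.5*sin(2*pi*2*t - 1.0)
Y <- fft(y)
a0 <- Re(Y[1]) / n                   # DC component: recovers 1.5
k  <- 1:2                            # the first two harmonics
a_k   <- 2 * Mod(Y[k + 1]) / n       # amplitudes: recover 2.0 and 0.5
rho_k <- Arg(Y[k + 1]) + pi/2        # phases for the a_k*sin(k*w*t + rho_k) form
The + pi/2 converts the phase reported by Arg() (a cosine convention) into the sine convention of the formula above, and the 2/n factor is needed because R's fft() is unnormalized.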

Related

How to calculate "compound" Markov transition matrix in Stata or R?

By "compound" I mean the transition matrix satisfies the Markov property,namely I have two columns s_t and s_t+k that represent state of each individual in two period t and t+k respectively.
What I want is to find the matrix M such that
s_t+k = M^k * s_t
so that matrix M satisfies the Markov property.
My default working language is Stata, in which commands like tab, svy: tab or xttran can generate one-period transition matrices, but these matrices do not necessarily satisfy the Markov property. So I wonder how to achieve my goal in Stata or another common language like R or Python.
PS: This problem arises from a paper that studies many countries' GDP_per_capita transition dynamics from 1960 to 2010. Say, at the beginning of each decade, we group all countries into 5 groups (from 1: extremely poor to 5: high-income), so we have a distribution of countries over 5 states. It would be easy if I simply estimated the decade-to-decade transition matrix with the markovchain package. However, the author claims that (page 11, footnote 4):
“The decade average transition matrix is estimated based on the 5-decade transition matrices from 1960 to 2010 by employing a numerical optimization program. Instead of taking the simple average for the five transition matrices (which suffers from Jensen’s Inequality), we estimate a transition matrix that can give us an exact 5-decade duration transition matrix (entry in 1960 and exit in 2010) by taking its power 5.”
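What the footnote describes is an inverse problem: find a one-decade matrix M whose 5th power reproduces the observed 1960-to-2010 transition matrix. A minimal sketch of that kind of optimization in R (my own illustration, not the paper's program; the squared-error objective and the row-wise softmax parameterization are assumptions I am making):
# Estimate a one-period stochastic matrix M such that M^k is as close as
# possible, in squared error, to an observed k-period matrix Pk.
mat_pow <- function(M, k) Reduce(`%*%`, replicate(k, M, simplify = FALSE))
fit_one_period <- function(Pk, k = 5) {
  n <- nrow(Pk)
  obj <- function(par) {
    M <- matrix(exp(par), n, n)
    M <- M / rowSums(M)              # softmax rows: always a valid stochastic matrix
    sum((mat_pow(M, k) - Pk)^2)
  }
  fit <- optim(rep(0, n * n), obj, method = "BFGS")
  M <- matrix(exp(fit$par), n, n)
  M / rowSums(M)
}
The softmax keeps every candidate M row-stochastic during unconstrained optimization, which is what lets plain optim handle the constraint.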
In R you can use the markovchain package to estimate a transition matrix that satisfies the Markov property. You can use the following example code:
library(markovchain)
data(rain)
mysequence<-rain$rain
createSequenceMatrix(mysequence)
myFit<-markovchainFit(data=mysequence, method="bootstrap", nboot=5, name="Bootstrap Mc")
myFit
myFit contains your estimated transition matrix (see myFit$estimate). This example uses the Alofi rainfall dataset.
Note that matrix multiplication in R is %*%, not *.
I wrote a simple function in R to solve the problem.
trans_mat <- function(k, s_t, M) {
  Mk <- diag(nrow(M))      # start from the identity matrix
  for (i in 1:k) {
    Mk <- Mk %*% M         # accumulate M^k one period at a time
  }
  Mk %*% s_t
}
Now all you need to do is pass in k (how many periods ahead you want), s_t (the initial state distribution), and M (the one-period transition matrix):
s_t+k = trans_mat(k, s_t, M)
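If you prefer not to hand-roll the loop, the expm package provides a matrix power operator that does the same job:
library(expm)
s_tk <- (M %^% k) %*% s_t   # M raised to the k-th power, applied to the state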
The markovchain package also implements powers directly for any markovchain object:
require(markovchain)
#creating the MC
myMatr<-matrix(data=c(0.2,0.8,.6,.4),ncol=2,byrow=TRUE)
myMc<-as(myMatr,"markovchain")
#5th power of the MC
myMc5<-myMc^5
myMc5

Discrepancy in Cubic Spline Interpolation, R & matlab

I am trying to replicate MATLAB's spline() function using the spline() function from R's stats package (see also splinefun), without having full access to MATLAB (I don't have a licence for it). I am able to input into R all of the data that would be present in MATLAB, but my spline output differs from MATLAB's by an average of .0036 (max diff .0342, min diff -.0056, stdev .0094). My main question is: how does MATLAB's formula compare to R's, and is that where my calculation discrepancy might come from?
The first part of my code feeds the Excel spreadsheet into R, then calculates the variables needed to get tau and quick delta. After this, I run the spline calculation and then rotate the output for export back into Excel. Below is the essential script, plus some data to try out in case something is flawed in my calculation. I use spline(method="natural"), as it returns the values closest to MATLAB's model.
#establishing what tau is for quick Delta calculation
today<-Sys.Date()
month<-as.Date("2016-05-01")
difday<-difftime(month,today,units=c("days"))
Tau<-as.numeric((month-today)/365)
Pu<-as.numeric(1.94)
Vol<-as.numeric(.4261)
#Pf is the representation of my fixed strike prices, the points used for interpolation
Pf<-c(Pu-.3,Pu-.25,Pu-.2,Pu-.1,Pu,Pu+.1,Pu+.2,Pu+.25,Pu+.3)
qDtable<-data.frame(matrix(ncol=length(Pf),nrow=length(month)))
colnames(qDtable)<-c(Pf)
rownames(qDtable)<-format.Date(month)
#my quick Delta calculation & table as a result
qD<-data.frame(pnorm(log(Pf/Pu)/(Vol*sqrt(Tau))))
Qd<-t(qD[1:length(Pf),1])
qDtable[1,]=c(Qd)
#setting up for spline interpolation
qDpoint<-as.numeric(qDtable[1,1:length(Pf)])
ncsibyPf<-data.frame(matrix(ncol=length(Pf),nrow=length(month)))
colnames(ncsibyPf)<-Pf
rownames(ncsibyPf)<-format.Date(month)
qDvol<-data.frame(matrix(ncol=14,nrow=2))
colnames(qDvol)<-c("",0,.05,.1,.2,.3,.4,.5,.6,.7,.8,.9,.95,1)
rownames(qDvol)<-c("quick delta",format.Date(month))
qDvol[1,2:14]<-c(0,.05,.1,.2,.3,.4,.5,.6,.7,.8,.9,.95,1)
qDvol[2,2:14]<-c(.59612,.51112,.46112,.45612,.44612,.42612,.42612,.42612,.42612,.42612,.42612,.42612,.42612)
#x is the quick Vol point
x<-as.numeric(qDvol[1,2:14])
#y is the vol at the quick Vol point
y<-as.numeric(qDvol[2,2:14])
ncsivol<-data.frame(spline(x,y,xout=qDpoint,method="natural"))
nroutput<-t(ncsivol[1:length(Pf),2])
ncsibyPf[1,]=c(nroutput)
The essential data points for this spline run are all included (I think), and everything should line up correctly. Thank you for your help ahead of time!
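One hedged observation rather than a verified diagnosis: MATLAB's spline() uses not-a-knot end conditions, while method="natural" in R forces the second derivative to zero at the boundaries, so part of the gap may come from the boundary choice alone. Comparing R's own end conditions on the vol data above gives a feel for how much that choice by itself moves the interpolated values:
x <- c(0,.05,.1,.2,.3,.4,.5,.6,.7,.8,.9,.95,1)
y <- c(.59612,.51112,.46112,.45612,.44612,.42612,.42612,
       .42612,.42612,.42612,.42612,.42612,.42612)
xout <- seq(0, 1, by = 0.01)
nat <- spline(x, y, xout = xout, method = "natural")$y
fmm <- spline(x, y, xout = xout, method = "fmm")$y
summary(nat - fmm)   # differences concentrate near the boundaries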

Wavelet reconstruction of time series

I'm trying to reconstruct the original time series from a Morlet wavelet transform. I'm working in R, with the cwt function from the Rwave package. The result of this function is an n*m matrix (n = period, m = time) containing complex values.
To reconstruct the signal I used formula (11) in Torrence & Compo's classic text, but the result has nothing to do with the original signal. I'm especially concerned with the division of the real part of the wavelet transform by the scale; this step completely distorts the result. On the other hand, if I just sum the real parts over all the scales, the result is quite similar to the original time series, but with a slightly wider range (the original series spans roughly [-0.2, 0.5], the reconstructed one roughly [-0.4, 0.7]).
I'm wondering if someone could suggest a practical procedure, formula, or algorithm to reconstruct the original time series. I've already read Torrence and Compo (1998), Farge (1992), and other references, all with different formulas, but none really helped.
I have been working on this topic recently, using the same paper. Below I show code for an example dataset, detailing how I implemented the wavelet decomposition and reconstruction.
# Let's first write a function for the wavelet decomposition, as in formula (1):
mo<-function(t,trans=0,omega=6,j=0){
  dial<-2*2^(j*.125)
  sqrt(1/dial)*pi^(-1/4)*exp(1i*omega*((t-trans)/dial))*exp(-((t-trans)/dial)^2/2)
}
# An example time series data:
y<-as.numeric(LakeHuron)
From my experience, for correct reconstruction you should do two things: first subtract the mean to get a zero-mean dataset; then increase the maximal scale. I mostly use J = 110 (although the formula in Torrence and Compo suggests 71).
# subtract mean from data:
y.m<-mean(y)
y.madj<-y-y.m
# increase the scale:
J<-110
wt<-matrix(rep(NA,(length(y.madj))*(J+1)),ncol=(J+1))
# Wavelet decomposition:
for(j in 0:J){
  for(k in 1:length(y.madj)){
    wt[k,j+1]<-mo(t=1:length(y.madj),j=j,trans=k)%*%y.madj
  }
}
#Extract the real part for the reconstruction:
wt.r<-Re(wt)
# Reconstruct as in formula (11); the factor 0.2144548 is dj/(C_delta*pi^(-1/4))
# for a Morlet with omega_0 = 6 and dj = 0.125 (C_delta = 0.776 in
# Torrence & Compo, Table 2):
dial<-2*2^(0:J*.125)
rec<-rep(NA,length(y.madj))
for(l in 1:length(y.madj)){
  rec[l]<-0.2144548*sum(wt.r[l,]/sqrt(dial))
}
rec<-rec+y.m
plot(y,type="l")
lines(rec,col=2)
As you can see in the plot, it looks like a perfect reconstruction.
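If you want a numeric check to go with the plot (my addition, using the objects defined above):
max(abs(y - rec))   # worst-case reconstruction error
cor(y, rec)         # should be very close to 1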

Using R's fft function

I'm currently trying to use the fft function in R to transform measured soil temperature at a certain depth so as to model soil temperatures and heat fluxes at different depths.
I wanted to clarify some points regarding the fft function in R, as I'm currently experiencing problems implementing this procedure.
I have a data frame containing the date-time and soil temperatures at 5 cm depth (T5) for a period of several months. According to the literature, it is possible to simulate temperatures and heat fluxes at different depths based on a fast Fourier transform of the measured data.
So my first step was naturally DF$FFT = fft(DF$T5),
from which I receive a series of complex numbers (Cn), i.e. the respective real (an) and imaginary (bn) parts.
According to the literature, I can then recreate the T5 data with a formula based on the outputs of the aforementioned fft:
T_(0,t) = T̄ + Σ_{n=1}^{M} A_n·sin(nωt + φ_n)
where T_(0,t) is the temperature at a given time point t, T̄ is the mean temperature over the period, M is the highest harmonic, and:
A_n = (2/√N)·|C_n|, where |C_n| is the modulus of the complex coefficient of the n-th harmonic, i.e. Mod(DF$FFT)
φ_n = arctan(a_n/b_n), i.e. arctan(Re(DF$FFT)/Im(DF$FFT))
ω = 2π/N
Unfortunately, based on the output of fft in R, I cannot recreate the temperature values using the above formula. I realise I can recreate the data using
fft(fft(DF$T5), inverse = TRUE)/length(DF$T5)
but I need to be able to do it with the above equation, so that I can use its terms to model temperatures at other depths. Could anyone lend a hand on where I may be going wrong with the procedure described above? For example, the above procedure was implemented in a paper where the fft function from Mathcad was used! I am not looking for a quick-fix solution to my problem, so I understand that more data and info would be handy if that were the case. What I am looking for is a bit of guidance on e.g. any peculiarities of the R fft that I should be aware of.
If anyone could help in any way it would be most appreciated. Also, if anyone needs more info regarding my problem, please do ask.
thanks a lot
Brad
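One peculiarity of R's fft that matches these symptoms, offered as a hedged suggestion rather than a confirmed diagnosis: R's fft() is unnormalized, whereas Mathcad's fft() divides by √N, so the A_n = (2/√N)·|C_n| formula from a Mathcad-based paper becomes A_n = 2·|C_n|/N when the coefficients come from R. A minimal reconstruction sketch along those lines (DF$T5 is assumed from the question; M, the number of harmonics, is a free choice):
N  <- length(DF$T5)
C  <- fft(DF$T5)
M  <- 10                         # number of harmonics to keep
tt <- 0:(N - 1)                  # sample index, matching R's fft convention
rec <- Re(C[1]) / N              # the DC term is the series mean
for (n in 1:M) {
  An  <- 2 * Mod(C[n + 1]) / N   # amplitude, with R's unnormalized fft
  phi <- Arg(C[n + 1])           # phase; Arg() avoids the arctan quadrant issue
  rec <- rec + An * cos(2 * pi * n * tt / N + phi)
}
As M grows toward N/2, rec should reproduce DF$T5 (up to the Nyquist term).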

approximation methods

I attached an image:
(source: piccy.info)
The image shows the plot of a function defined at given points, for example at x = 1..N. A second curve, drawn as a semi-transparent line, is what I want to obtain from the original one: I want to approximate the original function so that it becomes smooth.
Are there any methods for doing that?
I have heard about the least squares method, which can be used to approximate a function by a straight line or by a parabola, but I do not need to approximate by a parabola; I probably need to approximate by a trigonometric function.
So are there any methods for doing that?
And one idea: is it possible to use the least squares method for this problem, if we can derive it for trigonometric functions?
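A minimal sketch of that idea in R: least squares over a sin/cos basis is just a linear model (harmonic regression). The harmonic count K and the toy data are illustrative choices:
N <- 100
x <- 1:N
y <- sin(2*pi*x/N) + 0.3*rnorm(N)    # noisy test signal
K <- 3                               # number of harmonics in the basis
B <- do.call(cbind, lapply(1:K, function(k)
  cbind(sin(2*pi*k*x/N), cos(2*pi*k*x/N))))
fit <- lm(y ~ B)                     # ordinary least squares fit
plot(x, y)
lines(x, fitted(fit), col = 2)       # smooth trigonometric approximation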
One more question! If I use the discrete Fourier transform and think of the function as a sum of waves, then maybe the noise has special features by which we can identify it; we could then set the corresponding frequencies to zero and perform the inverse Fourier transform.
So if you think that is possible, what can you suggest for identifying the frequencies of the noise?
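A minimal sketch of that denoising idea in R (the cutoff K is a judgment call; in practice you inspect Mod(fft(y)) to see where the signal's harmonics end and the noise floor begins):
N <- 200
x <- 1:N
y <- sin(2*pi*x/50) + 0.3*rnorm(N)   # smooth signal plus noise
Y <- fft(y)
K <- 5                               # keep only the lowest K harmonics
keep <- c(1:(K + 1), (N - K + 1):N)  # DC, harmonics 1..K, and their conjugates
Y[-keep] <- 0
smooth <- Re(fft(Y, inverse = TRUE)) / N
plot(x, y)
lines(x, smooth, col = 2)            # low-pass filtered series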
Unfortunately, many of the solutions presented here don't solve the problem and/or are plain wrong. There are many approaches, each built for specific conditions and requirements you must be aware of:
a) Approximation theory: if you have a very sharply defined function without errors (given either by definition or by data) and you want to trace it as exactly as possible, you use polynomial or rational approximation with Chebyshev or Legendre polynomials, meaning that you approach the function by a polynomial or, if it is periodic, by a Fourier series.
b) Interpolation: if you have a function where some points (but not the whole curve!) are given and you need a function that passes through those points, you can use several methods: Newton-Gregory, Newton with divided differences, Lagrange, Hermite, splines.
c) Curve fitting: you have a function with given points and you want to draw a curve of a given (!) functional form which approximates the points as closely as possible. There are linear and nonlinear algorithms for this case.
Your drawing implies:
It is not remotely like a mathematical function.
It is not sharply defined by data or a formula.
You need to fit the whole curve, not just some points.
What you want and need is
d) Smoothing: given a curve or data points with noise or rapidly changing elements, you only want to see the slow changes over time.
You can do that with LOESS, as Jacob suggested (but I find that overkill, especially because choosing a reasonable span needs some experience). For your problem, I simply recommend the running average, as suggested by Jim C.
http://en.wikipedia.org/wiki/Running_average
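In R, that running average is essentially a one-liner with stats::filter (the window width w is something you tune by eye; the toy series is only for illustration):
w <- 9                                        # window width
y <- sin(2*pi*(1:200)/50) + 0.3*rnorm(200)    # toy noisy series
smooth <- stats::filter(y, rep(1/w, w), sides = 2)  # centered moving average
plot(y)
lines(smooth, col = 2)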
Sorry, cdonner and Orendorff, your proposals are well-meant, but completely wrong here, because you are applying the right tools to the wrong problem.
These guys used a sixth-degree polynomial to fit climate data and embarrassed themselves completely:
http://scienceblogs.com/deltoid/2009/01/the_australians_war_on_science_32.php
http://network.nationalpost.com/np/blogs/fullcomment/archive/2008/10/20/lorne-gunter-thirty-years-of-warmer-temperatures-go-poof.aspx
Use loess in R (free).
E.g. here the loess function approximates a noisy sine curve.
(source: stowers-institute.org)
As you can see, you can tweak the smoothness of your curve with the span parameter.
Here's some sample R code from here:
Step-by-Step Procedure
Let's take a sine curve, add some "noise" to it, and then see how the loess "span" parameter affects the look of the smoothed curve.
Create a sine curve and add some noise:
period <- 120
x <- 1:120
y <- sin(2*pi*x/period) + runif(length(x),-1,1)
Plot the points on this noisy sine curve:
plot(x, y, main="Sine Curve + 'Uniform' Noise")
mtext("showing loess smoothing (local regression smoothing)")
Apply loess smoothing using the default span value of 0.75:
y.loess <- loess(y ~ x, span=0.75, data.frame(x=x, y=y))
Compute loess smoothed values for all points along the curve:
y.predict <- predict(y.loess, data.frame(x=x))
Plot the loess smoothed curve along with the points that were already plotted:
lines(x, y.predict)
You could use a digital filter like an FIR filter. The simplest FIR filter is just a running average. For more sophisticated treatment, look at something like an FFT.
This is called curve fitting. The best way to do this is to find a numeric library that can do it for you. Here is a page showing how to do this using scipy. The picture on that page shows what the code does:
(source: scipy.org)
Now it's only 4 lines of code, but the author doesn't explain it at all. I'll try to explain briefly here.
First you have to decide what form you want the answer to be. In this example the author wants a curve of the form
f(x) = p0 cos (2π/p1 x + p2) + p3 x
You might instead want the sum of several curves. That's OK; the formula is an input to the solver.
The goal of the example, then, is to find the constants p0 through p3 to complete the formula. scipy can find this array of four constants. All you need is an error function that scipy can use to see how close its guesses are to the actual sampled data points.
fitfunc = lambda p, x: p[0]*cos(2*pi/p[1]*x+p[2]) + p[3]*x # Target function
errfunc = lambda p: fitfunc(p, Tx) - tX # Distance to the target function
errfunc takes just one parameter: an array of length 4. It plugs those constants into the formula and calculates an array of values on the candidate curve, then subtracts the array of sampled data points tX. The result is an array of error values; presumably scipy will take the sum of the squares of these values.
Then just put some initial guesses in and scipy.optimize.leastsq crunches the numbers, trying to find a set of parameters p where the error is minimized.
p0 = [-15., 0.8, 0., -1.] # Initial guess for the parameters
p1, success = optimize.leastsq(errfunc, p0[:])
The result p1 is an array containing the four constants. success is 1, 2, 3, or 4 if the solver actually found a solution. (If errfunc is sufficiently crazy, the solver can fail.)
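For completeness, the same curve-fitting idea in R, since the rest of this page uses R (a sketch with synthetic data; the model form mirrors the scipy example):
# Fit f(x) = p0*cos(2*pi/p1*x + p2) + p3*x by nonlinear least squares.
x <- seq(0, 10, by = 0.1)
y <- -15*cos(2*pi/0.8*x) - 1*x + rnorm(length(x), sd = 1)
fit <- nls(y ~ p0*cos(2*pi/p1*x + p2) + p3*x,
           start = list(p0 = -15, p1 = 0.8, p2 = 0, p3 = -1))
coef(fit)   # recovered p0..p3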
This looks like polynomial approximation. You can play with polynomials in Excel ("Add Trendline" on a chart, select Polynomial, then increase the order to the level of approximation that you need). It shouldn't be too hard to find an algorithm/code for that.
Excel can also show the equation it came up with for the approximation.
