Using FFT in R to Determine Density Function for IID Sum

The goal is to compute the density function of a sum of n IID random variables from the density function of one of these random variables by:
1) transforming the density function into the characteristic function via fft,
2) raising the characteristic function to the n-th power, and
3) transforming the resulting characteristic function back into the density function of interest via fft(inverse=TRUE).
Below is my naive attempt at this:
sum_of_n <- function(density, n, xstart, xend, power_of_2)
{
  x <- seq(from=xstart, to=xend, by=(xend-xstart)/(2^power_of_2-1))
  y <- density(x)
  fft_y <- fft(y)
  fft_sum_of_y <- (fft_y ^ n)
  sum_of_y <- Re(fft(fft_sum_of_y, inverse=TRUE))
  return(sum_of_y)
}
In the above, density is an arbitrary density function: for example
density <- function(x){return(dgamma(x = x, shape = 2, rate = 1))}
n indicates the number of IID random variables being summed. xstart and xend are the start and end of the approximate support of the random variable. power_of_2 sets the length of the numeric vectors as a power of two; as I understand it, power-of-two lengths improve the efficiency of the fft algorithm.
I understand at least partially why the above does not work as intended in general. Firstly, the values themselves will not be scaled correctly, as fft(inverse=TRUE) does not normalize by default. However, I find that the values are still not correct when I divide by the length of the vector i.e.
sum_of_y <- sum_of_y / length(sum_of_y)
which based on my admittedly limited understanding of fft is the normalizing calculation. Secondly, the resulting vector will be out of phase due to (someone correct me on this if I am wrong) the shifting of the zero frequency that occurs when fft is performed. I have tried to use, for example, pracma's fftshift and ifftshift, but they do not appear to address this problem correctly. For symmetric distributions e.g. normal, this is not difficult to address since the phase shift is typically exactly half, so that an operation like
sum_of_y <- c(sum_of_y[(length(y)/2+1):length(y)], sum_of_y[1:(length(y)/2)])
works as a correction. However, for asymmetric distributions like the gamma distribution above this fails.
In conclusion, are there adjustments to the code above that will result in an appropriately scaled and appropriately shifted final density function for the IID sum?
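For reference, here is a minimal sketch of one way the scaling and shifting can be handled, under the assumptions that the grid spacing dx is folded into the transform, the input is zero-padded so the circular convolution matches the linear one, and the output grid for the sum starts at n*xstart (the function name sum_of_n_fft and the exact padding length are my own choices, not from the question):
sum_of_n_fft <- function(density, n, xstart, xend, power_of_2) {
  m  <- 2^power_of_2
  x  <- seq(from = xstart, to = xend, length.out = m)
  dx <- x[2] - x[1]
  p  <- density(x) * dx                   # approximate probability masses on the grid
  p  <- c(p, rep(0, (n - 1) * m))         # zero-pad so the circular convolution equals the linear one
  N  <- length(p)
  conv_n <- Re(fft(fft(p)^n, inverse = TRUE)) / N   # n-fold convolution; /N undoes R's unnormalised inverse
  list(x = n * xstart + (seq_len(N) - 1) * dx,      # the support of the sum starts at n * xstart
       y = conv_n / dx)                             # convert masses back to a density
}
# e.g. the sum of three Gamma(2, 1) variables should be close to dgamma(, shape = 6, rate = 1)
out <- sum_of_n_fft(function(x) dgamma(x, shape = 2, rate = 1), n = 3,
                    xstart = 0, xend = 30, power_of_2 = 12)
plot(out$x, out$y, type = "l", xlim = c(0, 30))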

Related

How to calculate the volume under a surface defined by discrete data?

I need to determine the volume beneath a series of surfaces represented by discrete data points. In my data, each sample is stored as a separate data frame within a list of data frames. Here is some (small) example data:
df1 <- data.frame(x=c(2,2,2,3,3,3,4,4,4,5,5,5,6,6,6),
                  y=c(1,2,3,1,2,3,1,2,3,1,2,3,1,2,3),
                  z=c(0,2,0,4,6,7,3,2,1,2,7,8,9,4,2))
df2 <- data.frame(x=c(2,2,2,3,3,3,4,4,4,5,5,5,6,6,6),
                  y=c(1,2,3,1,2,3,1,2,3,1,2,3,1,2,3),
                  z=c(1,1,2,3,5,6,2,1,3,3,8,9,8,3,1))
DF <- list(df1,df2)
Answers to similar questions are either in other languages (MATLAB, Python), or the answers do not contain usable script to address the problem (as here). I can think of two acceptable ways to estimate the volume beneath each surface: 1) write out a discretized version of Simpson's rule as a function in R that is applied across the list of data frames (DF); 2) calculate an arbitrary relationship between x, y, and z and use multivariate numerical integration to find the volume under the surface (with functions like simpson2d / quad2d in the package pracma or adaptIntegrate in cubature).
Regarding the first approach, the formula for the composite Simpson's rule (that I would like to use) is here, but due to its complexity, I have been unsuccessful in writing a working double-summation function. In that expression, I(lambda(em), lambda(ex)) is equal to z in the above datasets at each (x, y) grid point, and Delta(em) and Delta(ex) represent the spacing between the x points and between the y points.
The second approach would essentially extend the approach found here to multivariate spline fits and then pass the predicted z values as a function for integration. Here's what I have tried so far for this approach:
require(pracma)
df1.loess <- loess(z ~ x + y, data=DF[[1]])
mod.fun <- function(x,y) predict(df1.loess, newdata=x,y)
simpson2d(mod.fun, x=c(2,6), y=c(1,3))
But this does not yield useful results.
In reality, I have a list of almost 100 data frames for individual samples, so I really need to be able to express the solution as a series of lapply functions that automate these calculations across all data frames in the list. An example looks something like this:
require(akima)
DF.splines <- lapply(DF, function(x,y,z) interp(x = "x", y = "y", z = "z",
                                                linear=F, nx=4, ny=2))
Unfortunately, this produces an exception for missing values and Infs. I'm extremely open to any suggestions for how to successfully implement one of these strategies, or to utilize a different (simpler?) approach. Could a kriging function (like km in the DiceKriging package) produce a better fit that could be passed on for numerical integration?
I am assuming that the surface mesh is defined by connecting the points via straight lines. Then you can find the volume beneath that surface via:
1) a triangular tessellation of the (x,y) grid into triangles T_i with areas A_i,
2) finding the corresponding z values Z_i at the vertices of each triangle T_i,
3) calculating the volume V_i of each truncated prism (defined by T_i and Z_i) via V_i = A_i * sum(Z_i) / 3 (see https://en.wikipedia.org/wiki/Prism_(geometry) and https://math.stackexchange.com/questions/2371139/volume-of-truncated-prism), and
4) summing up all truncated prism volumes V_i.
Keep in mind, however, that the volume does depend on your tessellation and that the tessellation is not unique. But your problem is not fully defined in the sense that it does not describe how one should interpolate between points. So any approach to calculate a volume will have to make additional assumptions.
Going back to my solution approach, points 1 and 2 can be achieved via the geometry package.
Here is some code:
library(geometry)
getVolume=function(df) {
  # find a triangular tessellation of the (x,y) grid
  res=delaunayn(as.matrix(df[,-3]), full=TRUE, options="Qz")
  # calculate the sum of the truncated prism volumes
  sum(mapply(function(triPoints, A) A/3*sum(df[triPoints, "z"]),
             split.data.frame(res$tri, seq_along(res$areas)),
             res$areas))
}
sapply(DF,getVolume)
#[1] 32.50000 30.33333
Since it's hard to check whether the results are consistent, here is a simple example where we know the right answer: a cube with side length 2 from which we have cut out a wedge along the x axis. The cut-out region is 1/4 of the total volume.
cutOutCube=expand.grid(c(0,1,2),c(0,1,2))
colnames(cutOutCube)=c("x","y")
cutOutCube$z=ifelse(cutOutCube$x==1,1,2)
sapply(list(cutOutCube),getVolume)
#[1] 6
That's correct since 2^3*(1-1/4)=6.
Another sanity check can be performed by calculating the "complement" of the volume w.r.t. a simple cuboid where all z values are set to the maximum z value (max(z)=9 in both of your cases). The simple cuboid volume is 72 in both cases. Now let's define the complement surfaces and sum up volume and complement volume:
df1c=df1
df1c$z=max(df1c$z)-df1c$z
df2c=df2
df2c$z=max(df2c$z)-df2c$z
DFc=list(df1c,df2c)
sapply(DFc,getVolume)+sapply(DF,getVolume)
#[1] 72 72
So volume and complement volume give the right simple cuboid volume in both cases.
You could approximate the surface through a "barycentric Lagrangian" approach as implemented in function barylag2d in the pracma package. Then, to avoid any vectorization problems, apply the Gaussian quadrature rules explicitly.
library(pracma)
df1 <- data.frame(x=c(2,2,2,3,3,3,4,4,4,5,5,5,6,6,6),
                  y=c(1,2,3,1,2,3,1,2,3,1,2,3,1,2,3),
                  z=c(0,2,0,4,6,7,3,2,1,2,7,8,9,4,2))
# Define the nodes in x- and y-direction
xn <- df1$x[c(1,4,7,10,13)]
yn <- df1$y[1:3]
# Define the matrix representing the function
m1 <- matrix(df1$z, nrow=5, byrow=TRUE)
f <- function(x, y)
  c(pracma::barylag2d(m1, xn, yn, x, y))
# 32 nodes in integration intervals
n <- 32
xa <- 2; xb <- 6; ya <- 1; yb <- 3
# Apply quadrature rules explicitly
cx <- gaussLegendre(n, xa, xb)
x <- cx$x; wx <- cx$w
cy <- gaussLegendre(n, ya, yb)
y <- cy$x; wy <- cy$w
# Sum weights * values over all nodes
I <- 0
for (i in 1:n) {
  for (j in 1:n) {
    I <- I + wx[i] * wy[j] * f(x[i], y[j])
  }
}
I # 40.37037
The integral value of 40 seems reasonable given the data. simpson2d or quad2d will not work in this setting.
You may try whether adaptIntegrate will work with the function f defined this way.
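For what it's worth, here is a minimal sketch of that last check, assuming only that adaptIntegrate is given a function of a single coordinate vector over the same rectangle as above (the wrapper name f_vec is mine):
library(cubature)
f_vec <- function(v) f(v[1], v[2])   # adaptIntegrate expects one coordinate vector c(x, y)
adaptIntegrate(f_vec, lowerLimit = c(xa, ya), upperLimit = c(xb, yb))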

Points uniformly distributed on unit disk (2D)

I am trying to generate 10,000 points from the uniform distribution on the unit disk and plot these points.
The method I am using has three steps. The first step is generating the magnitude x of the point; this magnitude has CDF F(x) = x^2 on [0, 1] (so min(x) = 0 and max(x) = 1). The second step involves generating a 2-dimensional vector (which I will call y) from the multivariate normal distribution with mu being the zero vector and sigma being the 2x2 identity matrix, i.e. MVN(0, I). Last, I normalize the vector y to have length x. I have tried to code the solution in R but I do not think my answer is correct. I would really appreciate it if I could be pointed in the right direction.
library(MASS)   # for mvrnorm
u = runif(10000)
x = u^2
y = mvrnorm(10000, mu=rep(0,2), Sigma=diag(2))
y_norm = (x*y)/sqrt(sum(y^2))
plot(y_norm, asp = 1)
I used the MASS package for mvrnorm. Also I have included the plot that I ended up with:
You need to compute the length of each of the rows in your y matrix; you are currently taking the square root of the sum of all the numbers in y, which just scales your bivariate normal sample by a constant. Also, you need x to be sqrt(u) rather than u^2. This code normalises each row by its length, uses the sqrt(u) scaling, and the result looks nice and uniform:
plot(sqrt(u)*y/sqrt(y[,1]^2+y[,2]^2))
There are better ways of making uniform points on a disc, unless this is just an exercise to do it this way...
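For illustration, one such alternative (my own sketch, not part of the answer) is to draw the radius as the square root of a uniform variable and the angle uniformly on [0, 2*pi), then convert to Cartesian coordinates:
n <- 10000
r <- sqrt(runif(n))            # radius with CDF r^2, i.e. uniform over the disk's area
theta <- runif(n, 0, 2 * pi)   # angle
plot(r * cos(theta), r * sin(theta), asp = 1, pch = ".")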

Change specific values with mean based on neighbors

I have a dataset with values from 100 to 200, but there are a few spikes in the data.
I don't want to smooth the whole dataset with rollmean or rollapply.
I want to work it this way:
find these spikes with a condition (value > 300)
replace these too-large values with the mean/median calculated from the 10 nearest neighboring values
Example in pseudo-code:
data[n] = spike
data[n] = mean(from data[n-5] to data[n+5])
It's like using window function not on the whole data set, only on certain points in data.
Thank you in advance
I like this question: a typical moving-average / k-nearest-neighbour estimation, i.e. a nonparametric approach. The following should work.
foo <- function(x, thresh = 300, h = 5, window.fun = mean) {
  spikes.loc <- which(x > thresh)
  low.bound <- spikes.loc - h
  up.bound <- spikes.loc + h
  N <- length(spikes.loc)
  x.hat <- x
  for (i in 1:N) x.hat[spikes.loc[i]] <- window.fun(x[low.bound[i]:up.bound[i]])
  return(x.hat)
}
This function takes your original observation vector x, a threshold, a window size h (a smoothing parameter), as well as a user-specified window function. The returned value is the vector of smoothed data; it only differs from the original data at the spike points. A common choice of window function is a kernel density function, so that you end up with a weighted average of all neighbouring data.
Please note, I am assuming your data are evenly spaced, so a simple index range x[(i-h):(i+h)] gives a reasonable neighbourhood. In a more general setting, a window is based on Euclidean distance, but that naively costs O(N^2), where N is the number of observations, which is expensive.
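A quick usage sketch, on simulated data of my own (the spike positions and values are arbitrary):
set.seed(1)
dat <- runif(200, 100, 200)                 # values between 100 and 200
dat[c(50, 120)] <- c(500, 650)              # inject two artificial spikes
dat.fixed <- foo(dat, thresh = 300, h = 5, window.fun = median)
range(dat.fixed)                            # spikes replaced by local medians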
In R, there are built-in nonparametric estimation / smoothing tools. The most basic one is kernel smoothing, a generalization of the moving average. It uses an FFT algorithm for fast computation at O(N log N) cost. Please see ?ksmooth. More advanced options are the KernSmooth and sm packages.
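For contrast with foo() above, which touches only the spike locations, here is a one-liner with the built-in smoother applied to the whole series from the previous sketch (the bandwidth of 10 is an arbitrary choice of mine):
smoothed <- ksmooth(seq_along(dat), dat, kernel = "normal", bandwidth = 10)$y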

Computing integral of a line plot in R

I have two positive-valued vectors x, y of the same length in R. Using plot(x, y, "l", ...) gives me a continuous line plot in two dimensions from my finite vectors x and y. Is there a way to compute a definite integral over some range of this line plot in R?
edit1: I've looked into the integrate function in R. I'm not sure, however, how to make a function out of two vectors to pass to it, as my vectors are both finite.
edit2: For some more background: the lengths of x and y are ~10,000. I've written a function to find periods, [xi, xj], of abnormalities in the data I'm observing. For each of these abnormalities, I've used plot to see what's going on in these snippets of my data. Now I need to compute statistics concerning the values of the integrals in these abnormal periods, so I'm trying to get as accurate a number as possible to match with my graphs. x is a time variable, and I've taken very fine intervals of time.
You can do the integration with integrate(). To create a function out of your vectors x and y, you need to interpolate between the values. approxfun() does exactly that.
integrate takes a function and two bounds.
approxfun takes two vectors x and y just like those you have.
So my solution would be:
integrate(approxfun(x,y), range(x)[1], range(x)[2])
The approxfun function will take 2 vectors and return a function that gives the linear interpolation between the points. This can then be passed to functions like integrate. The splinefun function will also do interpolation, but based on a spline rather than piecewise linear.
In the piecewise linear case the integral will just be the sum of the trapezoids; it may be faster/simpler to just sum the areas of the trapezoids directly (each one being the width, i.e. the difference in consecutive x's, times the average of the two corresponding y values).
I landed here much later, but for future visitors, here is some code for the suggestion from Greg Snow's answer, for piecewise-linear functions:
line_integral <- function(x, y) {
  dx <- diff(x)
  end <- length(y)
  my <- (y[1:(end - 1)] + y[2:end]) / 2   # average height of each segment
  sum(dx * my)                            # sum of trapezoid areas: width * average height
}
# example
x <- c(0, 2, 3, 4, 5, 5, 6)
y <- c(0, 0, 1, -2, -1, 0, 0)
plot(x, y, "l")
line_integral(x, y)
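As a rough cross-check (my own addition, not part of the answers above), the result should agree with integrate() applied to the linear interpolant, up to quadrature error; note that approxfun() collapses the duplicated x = 5 point via its ties argument:
integrate(approxfun(x, y), min(x), max(x))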

Combining two normal random variables

Suppose I have the following 2 random variables:
X, where mean = 6 and stdev = 3.5
Y, where mean = -42 and stdev = 5
I would like to create a new random variable Z based on the first two, knowing that X happens 90% of the time and Y happens 10% of the time.
It is easy to calculate the mean for Z: 0.9 * 6 + 0.1 * (-42) = 1.2
But is it possible to generate random values for Z in a single function?
Of course, I could do something along these lines:
if (randIntBetween(1,10) > 1)
GenerateRandomNormalValue(6, 3.5);
else
GenerateRandomNormalValue(-42, 5);
But I would really like to have a single function that would act as a probability density function for such a random variable (Z) that is not necessarily normal.
sorry for the crappy pseudo-code
Thanks for your help!
Edit: here is one concrete question. Let's say we add the results of 5 consecutive values from Z. What would be the probability of ending up with a number higher than 10?
But I would really like to have a single function that would act as a probability density function for such a random variable (Z) that is not necessarily normal.
Okay, if you want the density, here it is:
rho = 0.9 * density_of_x + 0.1 * density_of_y
But you cannot sample from this density unless you 1) compute its CDF (cumbersome, but not infeasible) and 2) invert it (you will need a numerical solver for this). Or you can do rejection sampling (or variants, e.g. importance sampling), which is costly and cumbersome to get right.
So you should go for the "if" statement (i.e. call the generator 3 times), unless you have a very strong reason not to (using quasi-random sequences, for instance).
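Written out in R, the mixture density rho above would look like this (a sketch; the name dZ is mine):
dZ <- function(z) 0.9 * dnorm(z, mean = 6, sd = 3.5) + 0.1 * dnorm(z, mean = -42, sd = 5)
curve(dZ, from = -60, to = 20)   # two bumps: one near -42, one near 6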
If a random variable is denoted x = (mean, stdev), then the following algebra applies:
number * x = (number*mean, number*stdev)
x1 + x2 = (mean1+mean2, sqrt(stdev1^2 + stdev2^2))
so for the case of X = (mx, sx), Y = (my, sy) the linear combination is
Z = w1*X + w2*Y = (w1*mx, w1*sx) + (w2*my, w2*sy)
  = (w1*mx + w2*my, sqrt((w1*sx)^2 + (w2*sy)^2))
  = (1.2, 3.19)
Link: Normal Distribution (look for the Miscellaneous section, item 1).
PS: Sorry for the weird notation. The new standard deviation is calculated by something similar to the Pythagorean theorem: it is the square root of the sum of squares.
This is the form of the distribution:
ListPlot[BinCounts[Table[If[RandomReal[] < .9,
                            RandomReal[NormalDistribution[6, 3.5]],
                            RandomReal[NormalDistribution[-42, 5]]], {1000000}], {-60, 20, .1}],
         PlotRange -> Full, DataRange -> {-60, 20}]
It is NOT Normal, as you are not adding Normal variables, but just choosing one or the other with certain probability.
Edit
This is the curve for adding five vars with this distribution:
The upper and lower peaks represent taking one of the distributions alone, and the middle peak accounts for the mixing.
The most straightforward and generically applicable solution is to simulate the problem:
Run the piecewise function you have 1,000,000 times (just a high number), generate a histogram of the results by splitting them into bins, and divide the count for each bin by your N (1,000,000 in my example). This will leave you with an approximation of the PDF of Z at every given bin.
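A brief R sketch of this simulation idea, also covering the edit about summing five values of Z (the code and names here are mine, not the answerer's):
set.seed(42)
rZ <- function(n) ifelse(runif(n) < 0.9, rnorm(n, 6, 3.5), rnorm(n, -42, 5))
z <- rZ(1e6)
hist(z, breaks = 200, freq = FALSE)          # approximate PDF of Z
sums <- colSums(matrix(rZ(5e6), nrow = 5))   # one million sums of five draws
mean(sums > 10)                              # Monte Carlo estimate of P(sum > 10)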
Lots of unknowns here, but essentially you just wish to add the two (or more) probability functions to one another.
For any given probability function you could calculate a random number with that density by calculating the area under the probability curve (the integral) and then generating a random number between 0 and that area. Then move along the curve until the area is equal to your random number and use that as your value.
This process can then be generalized to any function (or sum of two or more functions).
Elaboration:
Suppose you have a distribution function f(x) on the range 0 to 1. You could calculate a random number based on the distribution by computing the integral of f(x) from 0 to 1, which gives you the area under the curve; let's call it A.
Now you generate a random number between 0 and A; let's call that number r. Then you need to find a value t such that the integral of f(x) from 0 to t is equal to r. t is your random number.
This process can be used for any probability density function f(x). Including the sum of two (or more) probability density functions.
I'm not sure what your functions look like, so I'm not sure whether you can calculate analytic solutions for all this, but worst-case scenario, you could use numeric techniques to approximate the effect.
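As an illustration of that numeric route, assuming Z is the 90/10 normal mixture from the question (a sketch only; pZ and rZ_inv are my names, and pnorm plays the role of the integral of the density):
pZ <- function(z) 0.9 * pnorm(z, 6, 3.5) + 0.1 * pnorm(z, -42, 5)   # mixture CDF
rZ_inv <- function(n) {
  u <- runif(n)
  sapply(u, function(p) uniroot(function(z) pZ(z) - p, interval = c(-100, 60))$root)
}
samples <- rZ_inv(1000)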
