I have a matrix I want to transform such that every feature in the transformed dataset has a mean of 0 and a variance of 1.
I have tried to use the following code:
scale <- function(train, test)
{
  trainmean <- mean(train)
  trainstd <- sd(train)
  xout <- test
  for (i in 1:length(train[1,])) {
    xout[,i] = xout[,i] - trainmean(i)
  }
  for (i in 1:lenght(train[1,])) {
    xout[,i] = xout[,i]/trainstd[i]
  }
}
invisible(xout)
normalized <- scale(train, test)
This is, however, not working for me. Am I on the right track?
Edit: I am very new to the syntax!
You can use the built-in scale function for this.
Here's an example where we fill a matrix with random uniform variates between 0 and 1, then centre and scale each column to have mean 0 and unit standard deviation:
m <- matrix(runif(1000), ncol=4)
m_scl <- scale(m)
Confirm that the column means are 0 (within tolerance) and their standard deviations are 1:
colMeans(m_scl)
# [1] -1.549004e-16 -2.490889e-17 -6.369905e-18 -1.706621e-17
apply(m_scl, 2, sd)
# [1] 1 1 1 1
See ?scale for more details.
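Since your own function tries to centre and scale test using statistics computed from train, here is a minimal sketch of that with the built-in scale (assuming train and test are numeric matrices with the same columns):
train_means <- colMeans(train)
train_sds <- apply(train, 2, sd)
# Centre and scale the test set using the training set's column statistics
test_scl <- scale(test, center = train_means, scale = train_sds)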
To write your own normalisation function, you could use:
my_scale <- function(x) {
  apply(x, 2, function(col) {
    (col - mean(col))/sd(col)
  })
}
m_scl <- my_scale(m)
or the following, which is probably faster on larger matrices:
# Subtract the column means, then divide by the column standard deviations
my_scale <- function(x) sweep(sweep(x, 2, colMeans(x)), 2, apply(x, 2, sd), '/')
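Either version can be checked the same way as before, reusing the m from the example above:
m_scl <- my_scale(m)
colMeans(m_scl)     # all approximately 0
apply(m_scl, 2, sd) # all 1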
Just to suggest another hand-written normalising function that avoids apply, which in my experience is slower than matrix computation:
m <- matrix(rnorm(5000, 2, 3), 50, 100)
# Subtract the row means (computed via matrix products with vectors of ones)
m_centred <- m - m %*% rep(1, dim(m)[2]) %*% rep(1, dim(m)[2]) / dim(m)[2]
# Divide by the row standard deviations
m_norm <- m_centred / sqrt(m_centred^2 %*% rep(1, dim(m)[2]) / (dim(m)[2]-1)) %*% rep(1, dim(m)[2])
## Verification
rowMeans(m_norm)
apply(m_norm, 1, sd)
(Note that this normalises rows rather than columns.)
The title's a little rough; I'm open to suggestions to improve it.
I'm trying to calculate time-average covariances for a 500 length vector.
This is the equation we're using, i.e. the lag-k time-average autocovariance that my loop below computes: gamma(k) = (1/(n-k)) * sum over t = k+1, ..., n of (x_t - xbar)*(x_{t-k} - xbar).
The result I'm hoping for is a vector with an entry for each k from 0 to 500 (k = 0 would just be the variance of the whole set).
I've started with something like this, but I know I'll need to reference the gap (i) in the first mean comparison as well:
x <- rnorm(500)
xMean <- mean(x)
i <- seq(1, 500)
dfGam <- data.frame(i)
dfGam$gamma <- (1/(500-dfGam$i))*(sum((x-xMean)*(x[-dfGam$i]-xMean)))
Is it possible to do this using vector math or will I need to use some sort of for loop?
Here's the for loop that I've come up with for the solution:
gamma_func <- function(input_vec) {
  output_vec <- c()
  input_mean <- mean(input_vec)
  iter <- seq(1, length(input_vec)-1)
  for (val in iter) {
    iter2 <- seq((val+1), length(input_vec))
    gamma_sum <- 0
    for (val2 in iter2) {
      gamma_sum <- gamma_sum + (input_vec[val2]-input_mean)*(input_vec[val2-val]-input_mean)
    }
    output_vec[val] <- (1/length(iter2))*gamma_sum
  }
  return(output_vec)
}
Thanks
Using data.table, mostly for the shift function to make x_{t - k}, you can do this:
library(data.table)
gammabar <- function(k, x){
  xbar <- mean(x)
  n <- length(x)
  df <- data.table(xt = x, xtk = shift(x, k))[!is.na(xtk)]
  df[, sum((xt - xbar)*(xtk - xbar))/n]
}
gammabar(k = 10, x)
# [1] -0.1553118
The filter [!is.na(xtk)] starts the sum at t = k + 1, because xtk will be NA for the first k indices due to being shifted by k.
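If you'd rather stay in base R, here is a sketch of the same computation (same 1/n convention, using the x given below under "Reproducible x"):
gammabar_base <- function(k, x) {
  xbar <- mean(x)
  n <- length(x)
  sum((x[(k + 1):n] - xbar) * (x[1:(n - k)] - xbar)) / n
}
gammabar_base(k = 10, x) # should match gammabar(k = 10, x)
Note also that acf(x, type = "covariance") returns the whole sequence of these 1/n autocovariances at once.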
Reproducible x
x <- c(0.376972124936433, 0.301548373935665, -1.0980231706536, -1.13040590360378,
-2.79653431987176, 0.720573498411587, 0.93912102300901, -0.229377746707471,
1.75913134696347, 0.117366786802848, -0.853122822287008, 0.909259181618213,
1.19637295955276, -0.371583903741348, -0.123260233287436, 1.80004311672545,
1.70399587729432, -3.03876460529759, -2.28897494991878, 0.0583034949929225,
2.17436525195634, 1.09818265352131, 0.318220322390854, -0.0731475581637693,
0.834268741278827, 0.198750636733429, 1.29784138432631, 0.936718306241348,
-0.147433193833294, 0.110431994640128, -0.812504663900505, -0.743702167768748,
1.09534507180741, 2.43537370755095, 0.38811846676708, 0.290627670295127,
-0.285598287083935, 0.0760147178373681, -0.560298603759627, 0.447188372143361,
0.908501134499943, -0.505059597708343, -0.301004012157305, -0.726035976548133,
-1.18007702699501, 0.253074712637114, -0.370711296884049, 0.0221795637601637,
0.660044122429767, 0.48879363533552)
I have a question concerning the computation of a double integral in R. Maybe it is not the best software package to try numerical integration, but we are heavily relying on its stochastic optimisation packages (the function to be optimised is very non-trivial, with lots of local minima), so we cannot switch to MATLAB or other packages.
The problem is the following: it takes a whale of a time to compute the double integral using nested integrate functions, and several times more (!) using the hcubature approach from the cubature package. I tried the first solution from this answer (using hcubature from the cubature package), but it made the timing even worse; besides that, infinite integration limits are not supported, and the integration chokes for (-100, 100) interval already. With the second solution (quad2d from pracma package), the timing is great, but the computation result is way off!
The single integral is computed quite quickly (e.g., if the double integrals are commented out, it takes only 0.2 seconds to compute the value of the function, which is tolerable).
Here is a heavily simplified version of the function for the MWE (just to illustrate the point of integration).
library(cubature)
library(pracma)
# Generate some artificial data to try this function on
set.seed(100)
n <- 200
r <- rnorm(n, 0.0004, 0.01)
# Log-likelihood function accepts 3 parameters:
# [1] shape of positive shocks, [2] shape of negative shocks, [3] DoF of Student's distribution for jumps
parm <- c(6, 7, 10)
LL <- function(parm, cub = "default") {
  shapes <- parm[1:2]
  studdof <- parm[3]
  # For simplification, generate some dynamic series
  set.seed(101)
  sigmaeps <- rgamma(n, shape=shapes[1], rate=1000)
  sigmaeta <- rgamma(n, shape=shapes[2], rate=1000)
  lambdas <- rgamma(n, shape=10, rate=80)+1
  probs <- sapply(lambdas, function(x) dpois(0:2, lambda=x))
  probs <- sweep(probs, 2, colSums(probs), FUN="/") # Normalising the probabilities
  # Reserving memory for 3 series of density
  fw0 <- rep(NA, n)
  fw1 <- rep(NA, n)
  fw2 <- rep(NA, n)
  for (t in 2:n) {
    integ0 <- function(e) { # First integrand for 0 jumps
      1/sigmaeta[t] * dgamma(-(r[t]-sigmaeps[t]*e)/sigmaeta[t], shape=shapes[2]) * # Density of negative shocks
        dgamma(e, shape=shapes[1]) # Density of positive shocks
    }
    integ1 <- function(e, g) { # Double integrand for 1 jump
      1/sigmaeta[t] * dgamma(-(r[t]-sigmaeps[t]*e-1*g)/sigmaeta[t], shape=shapes[2]) * # Density of negative shocks
        dgamma(e, shape=shapes[1]) * # Density of positive shocks
        dt(g, df = studdof)/1 # Density of jump intensity
    }
    integ2 <- function(e, g) { # Double integrand for 2 jumps
      1/sigmaeta[t] * dgamma(-(r[t]-sigmaeps[t]*e-2*g)/sigmaeta[t], shape=shapes[2]) * # Density of negative shocks
        dgamma(e, shape=shapes[1]) * # Density of positive shocks
        dt(g, df = studdof)/2 # Density of jump intensity
    }
    # Wrappers for cubature because they need vector inputs
    wrapper1 <- function(x) integ1(x[1], x[2])
    wrapper2 <- function(x) integ2(x[1], x[2])
    # Single integral that is not a problem
    fw0[t] <- integrate(integ0, 0, Inf)$value
    if (cub=="cubature") {
      # 2D CUBATURE FROM cubature PACKAGE
      fw1[t] <- hcubature(wrapper1, c(0, -20), c(20, 20))$integral
      fw2[t] <- hcubature(wrapper2, c(0, -20), c(20, 20))$integral
    } else if (cub=="prac2d") {
      # 2D CUBATURE FROM pracma PACKAGE
      fw1[t] <- quad2d(integ1, 0, 100, -100, 100)
      fw2[t] <- quad2d(integ2, 0, 100, -100, 100)
    } else if (cub=="default") {
      # DOUBLE INTEGRALS FROM BUILT-IN INTEGRATE
      fw1[t] <- integrate(function(g) { sapply(g, function(g) { integrate(function(e) integ1(e, g), 0, Inf)$value }) }, -Inf, Inf)$value
      fw2[t] <- integrate(function(g) { sapply(g, function(g) { integrate(function(e) integ2(e, g), 0, Inf)$value }) }, -Inf, Inf)$value
    }
    if (!t%%10) print(t)
  }
  fw <- fw0*probs[1, ] + fw1*probs[2, ] + fw2*probs[3, ]
  fw <- log(fw[2:n])
  fw[is.nan(fw)] <- -Inf
  slfw <- sum(fw)
  print(paste0("Point: ", paste(formatC(parm, 4, format="e", digits=3), collapse=" "), ", LL: ", round(slfw, 2)))
  return(slfw)
}
system.time(LL(parm, cub="default"))
# 13 seconds
# "Point: 6.000e+00 7.000e+00 1.000e+01, LL: 247.78"
system.time(LL(parm, cub="cubature"))
# 29 seconds, the result is slightly off
# "Point: 6.000e+00 7.000e+00 1.000e+01, LL: 241.7"
system.time(LL(parm, cub="prac2d"))
# 0.5 seconds, the result is way off
# "Point: 6.000e+00 7.000e+00 1.000e+01, LL: 223.25"
(Ideally, integ1(e, g) and integ2(e, g) should be integrated over [0, Inf) w.r.t. e and over (-Inf, Inf) w.r.t. g.)
Parallelisation is done at a higher level (i.e., the stochastic optimiser is computing the values of this likelihood function in parallel), so it is essential that this function run as quickly as possible on a single core.
Is there any way to speed up the computation of this double integral?
Here is a wrapper for hcubature which I use to allow infinite limits:
hcubature.inf <- function() {
  # Arguments are the same as cubature::hcubature (formals are copied below)
  cl <- match.call()
  cl[[1L]] <- quote(cubature::hcubature)
  if (all(is.finite(c(lowerLimit, upperLimit)))) return(eval.parent(cl))
  # convert limits to new coordinates to incorporate infinities
  cl[['upperLimit']] <- atan(upperLimit)
  cl[['lowerLimit']] <- atan(lowerLimit)
  # wrap the function with the coordinate transformation
  # update argument to hcubature with our function
  f <- match.fun(f)
  cl[['f']] <- if (!vectorInterface)
    function(x, ...) f(tan(x), ...) / prod(cos(x))^2
  else
    function(x, ...) f(tan(x), ...) / rep(apply(cos(x), 2, prod)^2, each=fDim)
  eval.parent(cl)
}
formals(hcubature.inf) <- formals(cubature::hcubature)
Then you should vectorize the integrands:
vwrapper1 <- function(x) as.matrix(integ1(x[1,], x[2,]))
vwrapper2 <- function(x) as.matrix(integ2(x[1,], x[2,]))
And integrate:
if (cub=="cubature.inf") {
fw1[t] <- hcubature.inf(vwrapper1, c(0, -Inf), c(Inf, Inf), vectorInterface=TRUE)$integral
fw2[t] <- hcubature.inf(vwrapper2, c(0, -Inf), c(Inf, Inf), vectorInterface=TRUE)$integral
} else if (cub=="cubature") {
...
You get a value of 242.83 in about half the time of your default method.
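As a quick sanity check of the wrapper (a sketch, assuming the cubature package is installed), integrating the standard normal density over the whole real line should come out close to 1:
library(cubature)
hcubature.inf(function(x) dnorm(x), lowerLimit = -Inf, upperLimit = Inf)$integral
# expected to be approximately 1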
This is homework.
I am new to R.
I have two data frames, each containing two columns of data. I have to find a function that normalizes the first data frame to a mean of 0 and a variance of 1 for both columns. Then I want to apply that function to the second data frame.
I have tried this:
my_scale_test <- function(x,y) {
  apply(y, 2, function(x,y) {
    (y - mean(x))/sd(x)
  })
}
where x is the first data frame and y is the data frame to be normalized.
Can someone help me?
Edit:
I have now tried this as well, but it is not working either:
scale_func <- function(x,y) {
  xmean <- mean(x)
  xstd <- sd(x)
  yout <- y
  for (i in 1:length(x[1,]))
    yout[,i] <- yout[,i] - xmean[i]
  for (i in 1:length(x[1,]))
    yout[,i] <- yout[,i]/xsd[i]
  invisible(yout)
}
Edit 2:
I found this working function for MATLAB (which I tried to translate in the first edit):
function [ Xout ] = scale( Xbase, Xin )
    Xmean = mean(Xbase);
    Xstd = std(Xbase);
    Xout = Xin;
    for i=1:length(Xbase(1,:))
        Xout(:,i) = Xout(:,i) - Xmean(i);
    end
    for i=1:length(Xbase(1,:))
        Xout(:,i) = Xout(:,i)/Xstd(i);
    end
end
Can someone help me translate it?
Since you are new to R, let's try something really basic.
my_scale_test <- function(x, y) {
  y.nrow <- nrow(y)
  x.mean <- data.frame(t(apply(x, 2, mean)))
  x.sd <- data.frame(t(apply(x, 2, sd)))
  # To let x.mean and x.sd have the same dimension as y, let's repeat the rows.
  x.mean <- x.mean[rep(1, y.nrow), ]
  x.sd <- x.sd[rep(1, y.nrow), ]
  (y - x.mean)/x.sd
}
To test, try
set.seed(1)
x <- data.frame(matrix(rnorm(10), nrow = 5))
y <- x
result <- my_scale_test(x, y)
apply(result, 2, mean)
apply(result, 2, sd)
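If you later want something more compact, here is a sketch of the same idea using sweep (assuming x and y are all-numeric data frames or matrices, with x supplying the reference statistics):
my_scale_test2 <- function(x, y) {
  sweep(sweep(y, 2, colMeans(x), "-"), 2, apply(x, 2, sd), "/")
}
result2 <- my_scale_test2(x, y)
apply(result2, 2, mean) # approximately 0, as before
apply(result2, 2, sd)   # 1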
I want to find the mean of the standard normal distribution in a given interval.
For example, if I divide the standard normal distribution into two halves ([-Inf, 0] and [0, Inf]), I want to get the mean of each half.
The following code does almost exactly what I want:
divide <- 2
boundaries <- qnorm(seq(0,1,length.out=divide+1))
t <- sort(rnorm(100000))
means.1 <- rep(NA,divide)
for (i in 1:divide) {
means.1[i] <- mean(t[(t>boundaries[i])&(t<boundaries[i+1])])
}
But I need a more precise (and elegant) method to calculate these numbers (means.1).
I tried the following code, but it did not work (maybe because of my lack of probability knowledge).
divide <- 2
boundaries <- qnorm(seq(0,1,length.out=divide+1))
means.2 <- rep(NA,divide)
f <- function(x) {x*dnorm(x)}
for (i in 1:divide) {
means.2[i] <- integrate(f,lower=boundaries[i],upper=boundaries[i+1])$value
}
Any ideas?
Thanks in advance.
The problem is that the integral of dnorm(x) over the interval (-Inf, 0) isn't 1; that's why you got the wrong answer. To correct it, you must divide the result you got by 0.5 (the value of that integral). Like:
func <- function(x, ...) x * dnorm(x, ...)
integrate(func, -Inf, 0, mean=0, sd=1)$value / (pnorm(0, mean=0, sd=1) - pnorm(-Inf, mean=0, sd=1))
Adapting it to different intervals should be easy.
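For reference, the mean of a standard normal truncated to (a, b) also has a closed form, (dnorm(a) - dnorm(b)) / (pnorm(b) - pnorm(a)), which you can use to check the numerical result:
trunc_norm_mean <- function(a, b) (dnorm(a) - dnorm(b)) / (pnorm(b) - pnorm(a))
trunc_norm_mean(-Inf, 0) # left half: about -0.7978846
trunc_norm_mean(0, Inf)  # right half: about 0.7978846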
Thanks for answering my question.
I combined all the answers as I understand them:
divide <- 5
boundaries <- qnorm(seq(0,1,length.out=divide+1))
# My original thinking
t <- sort(rnorm(1e6))
means.1 <- rep(NA,divide)
for (i in 1:divide) {
means.1[i] <- mean(t[((t>boundaries[i])&(t<boundaries[i+1]))])
}
# Based on #DWin
t <- sort(rnorm(1e6))
means.2 <- tapply(t, findInterval(t, boundaries), mean)
# Based on #Rcoster
means.3 <- rep(NA,divide)
f <- function(x, ...) x * dnorm(x, ...)
for (i in 1:divide) {
means.3[i] <- integrate(f, boundaries[i], boundaries[i+1])$value / (pnorm(boundaries[i+1]) - pnorm(boundaries[i]))
}
# Based on #Kith
library(MASS) # for fitdistr
t <- sort(rnorm(1e6))
means.4 <- rep(NA,divide)
for (i in 1:divide) {
  means.4[i] <- fitdistr(t[t > boundaries[i] & t < boundaries[i+1]], densfun="normal")$estimate[1]
}
Results
> means.1
[1] -1.4004895486 -0.5323784986 -0.0002590746 0.5313539906 1.3978177100
> means.2
[1] -1.3993590768 -0.5329465789 -0.0002875593 0.5321381745 1.3990997391
> means.3
[1] -1.399810e+00 -5.319031e-01 1.389222e-16 5.319031e-01 1.399810e+00
> means.4
[1] -1.399057073 -0.531946615 -0.000250952 0.531615180 1.400086731
I believe #Rcoster's is the one that I wanted. The rest are innovative approaches compared to mine, but still approximate.
Thanks.
You can use a combination of fitdistr and vector indexing.
Here's an example of how to get mean and std of just the positive values:
library("MASS")
x = rnorm(10000)
fitdistr(x[x > 0], densfun="normal")
or just the values in the interval (0,2):
fitdistr(x[x > 0 & x < 2], densfun="normal")
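Since the maximum-likelihood estimate of a normal mean is just the sample mean, the mean part of that fitdistr output can also be obtained directly, without MASS:
mean(x[x > 0 & x < 2]) # same mean estimate as fitdistr above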
Let's say your cutpoints are -1, 0, 1, and 2, and you are interested in the mean of each section when simulating a standard Normal.
samp <- rnorm(1e5)
(res <- tapply(samp, findInterval(samp, c( -1, 0, 1, 2)), mean) )
# 0 1 2 3 4
#-1.5164151 -0.4585519 0.4608587 1.3836470 2.3824633
Please do note that the labeling could be improved. One improvement could be:
names(res) <- paste("[", c(-Inf, -1, 0, 1, 2, Inf)[-6], " , ",
c(-Inf, -1, 0, 1, 2, Inf)[-1], ")", sep="")
> res
[-Inf , -1) [-1 , 0) [0 , 1) [1 , 2) [2 , Inf)
-1.5278185 -0.4623743 0.4621885 1.3834442 2.3835116
Using the distrEx and distr packages:
library(distrEx)
E(Truncate(Norm(mean=0, sd=1), lower=0, upper=Inf))
# [1] 0.797884
(See vignette(distr) in the distrDoc package for an excellent overview of the suite of distr and related packages.)
Or, using just base R, here's an alternative that constructs a discrete approximation of the expectation within the interval between lb and ub. The bases of the approximating rectangles are adjusted so that they all have equal areas (i.e. so that the probability of a point falling in each one of them is identical).
intervalMean <- function(lb, ub, n=1e5, ...) {
  ## Get x-values at n evenly-spaced quantiles between lower and upper bounds
  xx <- qnorm(seq(pnorm(lb, ...), pnorm(ub, ...), length = n), ...)
  ## Calculate expectation
  mean(xx[is.finite(xx)])
}
## Your example
intervalMean(lb=0, ub=1)
# [1] 0.4598626
## The mean of the complete normal distribution
intervalMean(-Inf, Inf)
## [1] -6.141351e-17
## Right half of standard normal distribution
intervalMean(lb=0, ub=Inf)
# [1] 0.7978606
## Right half of normal distribution with mean 0 and standard deviation 100
intervalMean(lb=0, ub=Inf, mean=0, sd=100)
# [1] 79.78606
I have the following code to create a sample function and to generate simulated data
mean_detects <- function(obs, cens) {
  detects <- obs[cens==0]
  nondetects <- obs[cens==1]
  res <- mean(detects)
  return(res)
}
mu <- log(1); sigma <- log(3); n_samples <- 10; n_iterations <- 5; p <- 0.10
dset2 <- function (mu, sigma, n_samples, n_iterations, p) {
  X_after <- matrix(NA_real_, nrow = n_iterations, ncol = n_samples)
  delta <- matrix(NA_real_, nrow = n_iterations, ncol = n_samples)
  lod <- quantile(rlnorm(100000, mu, sigma), p = p)
  pct_cens <- numeric(n_iterations)
  count <- 1
  while (count <= n_iterations) {
    X_before <- rlnorm(n_samples, mu, sigma)
    X_after[count, ] <- pmax(X_before, lod)
    delta[count, ] <- X_before <= lod
    pct_cens[count] <- mean(delta[count, ])
    if (pct_cens[count] > 0 & pct_cens[count] < 1) count <- count + 1
  }
  ave_detects <- mean_detects(X_after, delta) ## how can I use apply or other functions here?
  return(ave_detects)
}
If I specify n_iterations as 1, I will have a 1x10 X_after matrix and also a 1x10 delta matrix. Then the mean_detects function works fine using this command:
ave_detects <- mean_detects(X_after,delta)
However, when I increase n_iterations to, say, 5, I will have two 5x10 matrices (X_after and delta), and then the mean_detects function does not work any more: it only gives me output for 1 iteration instead of 5. My real simulation has thousands of iterations, so speed and memory must also be taken into account.
Edit: I edited my code based on your comments. The mean_detects function that I created was meant to show an example of using the X_after and delta matrices simultaneously. The real function is very long; that's why I did not post it here.
Your actual question isn't really clear. So,
"My function only takes in 1 dataframe".
Actually your function takes in two vectors
Write code that can use both X_after and delta. This doesn't really mean anything - sorry.
"speed and memory must be taken into account". This is vague. Will your run out of memory? As a suggestion, you could think about a rolling mean. For example,
x = runif(5)
total = 0
for (i in seq_along(x)) {
  total = (i-1)*total/i + x[i]/i
  cat(i, ": mean ", total, "\n")
}
1 : mean 0.4409
2 : mean 0.5139
3 : mean 0.5596
4 : mean 0.6212
5 : mean 0.6606
Aside
Your dset2 function requires the variable n (which you haven't defined).
Your dset2 function doesn't return an obvious value.
Your mean_detects function can be simplified to:
mean(obs[cens==0])
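If the goal is one detected-values mean per iteration (i.e. per row), here is a sketch using sapply over the rows (assuming X_after and delta are the n_iterations x n_samples matrices built inside dset2):
row_mean_detects <- function(X_after, delta) {
  # mean of the uncensored (delta == 0) values in each row
  sapply(seq_len(nrow(X_after)), function(i) mean(X_after[i, delta[i, ] == 0]))
}
This returns a vector with one entry per iteration instead of a single overall mean.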