Integration struggles with "the condition has length > 1" [duplicate]

I'm having some problems with the integrate function in R. I'm trying to plot the integral vo, but it seems I'm not doing it correctly.
t <- seq(0, 0.04, 0.0001)
vi <- function(x) {5 * sin(2 * pi * 50 * x)}
vo <- function(x) {integrate(vi, lower=0, upper=x)$value}
test_vect = Vectorize(vo, vectorize.args='x')
plot(t, vo(t)) # should be a cosine wave
plot(t, vi(t)) # sine wave
vo should be a (shifted) cosine wave, but using test_vect gives me the wrong plot, and using vo directly gives the error 'x' and 'y' lengths differ. Can anyone please help me with this?

You are already there. Just use plot(t, test_vect(t)). You can't use vo directly, because integrate is not a vectorized function. There is no problem evaluating a single point like vo(0.002), but you cannot feed it a vector with vo(t). This is why we need Vectorize(vo)(t).
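If you prefer not to use Vectorize, an equivalent hand-rolled approach (a small sketch, not part of the original answer) is to loop over t with sapply:
vo_values <- sapply(t, vo)       # evaluate the scalar-only vo at each element of t
plot(t, vo_values, type = "l")   # same curve as plot(t, test_vect(t))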
You said that test_vect is not giving the right plot. Are you sure? We can compute the integral analytically:
v <- function (x) (1-cos(100*pi*x)) / (20*pi)
Then let's compare:
sum(abs(v(t) - test_vect(t)))
# [1] 2.136499e-15
They are the same!


I am a beginner in R and I'm trying to solve a system of equations, but when I run it I get an error [duplicate]

# my error : Error in F[1] <- n/(X[0]) - sum(log(1 + Y^exp(X[1] + X[2] * x))) : replacement has length zero
set.seed(16)
#Inverse Transformation on CDF
n=100
SimRRR.f <- function(100, lambda=1,tau)) {
  x = rnorm(100,0,1)
  tau = exp(-1-x)
  u = runif(100)
  y = (1/(u^(1/lambda)-1))^(1/tau)
  y
}
Y<-((1/u)-1)^exp(-1-x)
# MLE for Simple Linear Regresion
# System of equations
library(rootSolve)
library(nleqslv)
model <- function(X){
  F <- numeric(length(X))
  F[1] <- n/(X[0]) - sum(log(1 + Y^exp(X[1] + X[2]*x)))
  F[2] <- 2*n - (X[0]+1)*sum(exp(X[1] + X[2]*x))*Y^(exp(X[1] + X[2]*x))*log(Y)/(1 + Y^(exp(X[1] + X[2]*x)))
  F[3] <- sum(x) + sum(x*log(Y))*exp(X[1] + X[2]*x) - (X[0]+1)*X[1]*sum(exp(X[1] + X[2]*x)*Y^(exp(X[1] + X[2]*x)*log(Y)))/(1 + Y^(exp(X[1] + X[2]*x)))
  # Solution
  F
}
startx <- c(0.5,3,1) # start the answer search here
answers<-as.data.frame(nleqslv(startx,model))
The problem is that you define x, u, tau and y inside the SimRRR function, but you are trying to define Y in terms of them outside the function.
A function takes input and gives back output; all the other variables defined in the course of the function doing its job go away when it returns. As it stands, the line defining Y will either fail ("object not found") or silently reuse whatever stale values of u and x happen to be sitting in your global environment from earlier experimenting.
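A tiny illustration of that scoping rule (a sketch, not part of the original answer):
f <- function() {
  z <- 10   # z exists only while f() is running
  z
}
f()           # returns 10
exists("z")   # FALSE (in a fresh session): z is gone once the function returns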
Try the following functions, see if they do the job:
# I usually put all my library calls together at the beginning of the script.
library(rootSolve)
library(nleqslv)
x = rnorm(n,0,1) # see below for why this is pulled out.
SimRRR.f <- function(x, lambda = 1, tau) { # 100 can't be by itself in the function call; everything in there needs to be attached to an argument name.
  n <- length(x)
  tau <- exp(-1-x)
  u <- runif(n)
  y <- (1/(u^(1/lambda)-1))^(1/tau)
  y
}
Y_sim <- SimRRR.f(x = x, lambda = 1, tau = 1) # pass the x generated above; the tau argument is overwritten inside the function, so its value here does not matter.
Your second function has more issues. Namely, it relies on x, which is not defined anywhere that can be found. Either you need x from the previous function, or you really meant X. I'm going to assume you do need the values of x, since X is only of length 3. This is why I pulled it out of the last function call - we need it now.
[Update]
It's also been pointed out in the comments that the indexing is wrong: R vectors are 1-indexed, so X[0] returns a zero-length vector, which is exactly what triggers the "replacement has length zero" error. I didn't catch that previously (I'm assuming the formulas for the F elements are otherwise correct). I think I've fixed the indexing issues now:
model <- function(X, Y, x){ # If you use x and Y in the function, define them here.
  n <- length(x)
  F <- numeric(length(X))
  F[1] <- n/(X[1]) - sum(log(1 + Y^exp(X[2] + X[3]*x)))
  F[2] <- 2*n - (X[1]+1)*sum(exp(X[2] + X[3]*x))*Y^(exp(X[2] + X[3]*x))*log(Y)/(1 + Y^(exp(X[2] + X[3]*x)))
  F[3] <- sum(x) + sum(x*log(Y))*exp(X[2] + X[3]*x) - (X[1]+1)*X[2]*sum(exp(X[2] + X[3]*x)*Y^(exp(X[2] + X[3]*x)*log(Y)))/(1 + Y^(exp(X[2] + X[3]*x)))
  # Solution
  F
}
I'm not familiar with the nleqslv package, but unless there is a method defined to convert it to a data frame, that might not go so well. I'd make sure everything else is working before the conversion.
startx <- c(0.5,3,1) # start the answer search here
answers <- nleqslv(startx,model, Y = Y_sim, x = x)
answer_df <- as.data.frame(answers)
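Concretely, rather than coercing to a data frame, you can inspect the list that nleqslv() returns directly (a short sketch; see ?nleqslv for the full set of components):
answers$x       # the estimated parameters
answers$fvec    # the equation values at the solution (should be near zero)
answers$termcd  # termination code; 1 indicates convergence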

GRG Non-Linear Least Squares (Optimization)

I am trying to convert an Excel spreadsheet that uses the Solver add-in with GRG Non-Linear to optimize two variables that return the lowest sum of squared errors. I have 4 known times (B) at 4 known distances (A). I need to create an optimization routine to find which combination of values for Vmax and Tau produces the lowest sum of squared errors. I have looked at the nls function and the nloptr package but can't quite seem to piece them together. The current values for Vmax and Tau are what the Excel Solver determined; I just need to replicate that in R. Any and all help would be greatly appreciated. Thank you.
A <- c(0,10, 20, 40)
B <- c(0,1.51, 2.51, 4.32)
Measured <- as.data.frame(cbind(A, B))
Corrected <- Measured
Corrected$B <- Corrected$B + .2
colnames(Corrected) <- c("Distance (yds)", "Time (s)")
Corrected$`X (m)` <- Corrected$`Distance (yds)`*.9144
Vmax = 10.460615006988
Tau = 1.03682513806393
Predicted_X <- c(Vmax * (Corrected$`Time (s)`[1] - Tau + Tau*exp(-Corrected$`Time (s)`[1]/Tau)),
Vmax * (Corrected$`Time (s)`[2] - Tau + Tau*exp(-Corrected$`Time (s)`[2]/Tau)),
Vmax * (Corrected$`Time (s)`[3] - Tau + Tau*exp(-Corrected$`Time (s)`[3]/Tau)),
Vmax * (Corrected$`Time (s)`[4] - Tau + Tau*exp(-Corrected$`Time (s)`[4]/Tau)))
Corrected$`Predicted X (m)` <- Predicted_X
Corrected$`Squared Error` <- (Corrected$`X (m)`-Corrected$`Predicted X (m)`)^2
#Sum_Squared_Error <- sum(Corrected$`Squared Error`)
Is your issue still unsolved?
I'm working on a similar problem and I think I could help.
First you have to define a function that returns the sum of squared errors, with Vmax and Tau as its variables.
Then you can call an optimisation algorithm that will change these variables and look for a minimum of your function. optim() might be sufficient for your application, but here is the documentation for nloptr:
https://www.rdocumentation.org/packages/nloptr/versions/1.0.4/topics/nloptr
and here is a list of optimisation packages in R:
https://cran.r-project.org/web/views/Optimization.html
Edit:
I quickly recoded it the way I would do it. I'm a beginner, so it's probably not the best way, but it works.
A <- c(0,10, 20, 40)
B <- c(0,1.51, 2.51, 4.32)
Measured <- as.data.frame(cbind(A, B))
Corrected <- Measured
Corrected$B <- Corrected$B + .2
colnames(Corrected) <- c("Distance (yds)", "Time (s)")
Corrected$`X (m)` <- Corrected$`Distance (yds)`*.9144
#initialize values
Vmax0 = 15
Tau0 = 5
x0 = c(Vmax0,Tau0)
#define the function to optimise: optim() will minimise its output
f <- function(x) {
  # these variables will be adjusted to find the minimum value of f
  Vmax <- x[1]
  Tau  <- x[2]
  Predicted_X <- Vmax * (Corrected$`Time (s)` - Tau + Tau*exp(-Corrected$`Time (s)`/Tau))
  y <- sum((Predicted_X - Corrected$`X (m)`)^2)
  return(y)
}
#call optim: the results will be available in the variable Y
Y <- optim(x0, f)
If you type Y into the console, you will find that the solver finds the same values as Excel, and convergence is achieved.
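For example, the relevant pieces of the optim() result are (a short sketch):
Y$par          # the fitted Vmax and Tau, matching the Excel Solver values
Y$value        # the minimised sum of squared errors
Y$convergence  # 0 indicates successful convergence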
In R, there is no need to build data-frame columns with backtick-quoted names as you did; plain vectors (or simple column names) are much easier to work with. You should probably follow a tutorial on data frames first.
Also, it is misleading that you set the initial values to values that were already optimal. If you do this, then optim() will not optimise any further.
Here is the documentation for optim:
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/optim.html
and a tutorial on how to use functions:
https://www.datacamp.com/community/tutorials/functions-in-r-a-tutorial
Cheers

Importance sampling in R

I'm a beginner in statistics and am currently learning importance sampling. I have searched through similar problems here but still can't get mine solved.
If I need to evaluate E(x) of a target distribution
f(x)=2 * x * exp(-x^2), x>0
By using Importance Sampling, I take a proposal distribution
g(x)=exp(-x)
Then
E(x)=integral(x* (f(x)/g(x)) * g(x) dx)
=integral(exp(-x) * 4 * x^2 dx)
My R code was like this
x=rexp(1000)
w=4*x^2
y=exp(-w)
mean(y)
Am I doing it right?
Thanks a lot for your help!
I think you might want to do something like this:
x <- rexp(n = 1000, rate = 1)   # samples from the proposal density
fx <- function(x){
  2 * x * exp(-x^2)   # the target density f(x)
}
gx <- function(x){
  exp(-x)             # the proposal density g(x)
}
Ex <- mean(x * fx(x) / gx(x))
It is simply the weighted sample mean.
The non-weighted sample mean mean(x) gives you the expectation of the proposal density, while the weighted sample mean mean(w * x) gives the expectation of the target density. But you are using the wrong weight. I think the correct one is w <- 2 * x * exp(-x^2 + x).
If I were you, I would not compute weights myself. I would do
set.seed(0)
x <- rexp(1000) ## samples from proposal density
f <- function(x) 2 * x *exp(-x^2) ## target density
w <- f(x) / dexp(x) ## importance weights
mean(x) ## non-weighted sample mean
# [1] 1.029677
mean(w * x) ## weighted sample mean
# [1] 0.9380861
In theory, the expectation of the weights should be 1, but in practice you only get close to 1:
mean(w)
[1] 1.036482
So, you might want the normalized version:
mean(w * x) / mean(w)
[1] 0.9050671
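As a sanity check (an addition, not part of the original answer), the exact expectation under the target density can be computed numerically; both weighted estimates above are in the right neighbourhood of it:
integrate(function(x) x * 2 * x * exp(-x^2), 0, Inf)$value
# [1] 0.8862269   # equals sqrt(pi)/2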

Error in Gradient Descent Calculation

I tried to write a function to calculate gradient descent for a linear regression model. However, the answers I get do not match the answers from the normal-equation method.
My sample data is:
df <- data.frame(c(1,5,6),c(3,5,6),c(4,6,8))
with c(4,6,8) being the y values.
lm_gradient_descent <- function(df, learning_rate, y_col = length(df), scale = TRUE){
  n_features <- length(df) # n_features is the number of features in the data set
  # using mean normalization to scale features
  if(scale == TRUE){
    for (i in 1:(n_features)){
      df[,i] <- (df[,i] - mean(df[,i])) / sd(df[,i])
    }
  }
  y_data <- df[,y_col]
  df[,y_col] <- NULL
  par <- rep(1, n_features)
  df <- merge(1, df)
  data_mat <- data.matrix(df)
  # we need a temp_arr to store each iteration of parameter values so that we can do a
  # simultaneous update
  temp_arr <- rep(0, n_features)
  diff <- 1
  while(diff > 0.0000001){
    for (i in 1:(n_features)){
      temp_arr[i] <- par[i] - learning_rate * sum((data_mat %*% par - y_data) * df[,i]) / length(y_data)
    }
    diff <- par[1] - temp_arr[1]
    print(diff)
    par <- temp_arr
  }
  return(par)
}
Running this function,
lm_gradient_descent(df,0.0001,,0)
the results I got were
c(0.9165891,0.6115482,0.5652970)
When I use the normal equation method, I get
c(2,1,0).
Hope someone can shed some light on where I went wrong in this function.
You used the stopping criterion
old parameters - new parameters <= 0.0000001
First of all I think there's an abs() missing if you want to use this criterion (though my ignorance of R may be at fault).
But even if you use
abs(old parameters - new parameters) <= 0.0000001
this is not a good stopping criterion: it only tells you that progress has slowed down, not that it's already sufficiently accurate. Try instead simply to iterate for a fixed number of iterations. Unfortunately it's not that easy to give a good, generally applicable stopping criterion for gradient descent here.
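A minimal sketch of that fixed-iteration idea, reusing the objects from inside the question's function (the count of 10000 iterations is an arbitrary choice):
n_iter <- 10000
for (iter in 1:n_iter) {
  for (i in 1:n_features) {
    temp_arr[i] <- par[i] - learning_rate * sum((data_mat %*% par - y_data) * df[,i]) / length(y_data)
  }
  par <- temp_arr  # simultaneous update, as in the original code
}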
It seems that you have not implemented a bias term. In a linear model like this, you always want to have an additional additive constant, i.e., your model should be like
w_0 + w_1*x_1 + ... + w_n*x_n.
Without the w_0 term, you usually won't get a good fit.
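In code, that usually just means prepending a column of ones to the design matrix (here called X, as in the answer below):
X <- cbind(1, X)   # the weight on this constant column plays the role of w_0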
I know this is a couple of weeks old at this point, but I'm going to take a stab at it for several reasons, namely:
Relatively new to R so deciphering your code and rewriting it is good practice for me
Working on a different Gradient Descent problem so this is all fresh to me
Need the Stack Overflow points, and
As far as I can tell you never got a working answer.
First, regarding your data structures. You start with a data frame, rename a column, strip out a vector, then strip out a matrix. It would be a lot easier to just start with an X matrix (capitalized since its component 'features' are referred to as x_i) and a y solution vector.
X <- cbind(c(1,5,6),c(3,5,6))
y <- c(4,6,8)
We can easily see what the desired solutions are, with and without scaling by fitting a linear fit model. (NOTE We only scale X/features and not y/solutions)
> lm(y~X)
Call:
lm(formula = y ~ X)
Coefficients:
(Intercept) X1 X2
-4 -1 3
> lm(y~scale(X))
Call:
lm(formula = y ~ scale(X))
Coefficients:
(Intercept) scale(X)1 scale(X)2
6.000 -2.646 4.583
With regard to your code, one of the beauties of R is that it can perform matrix multiplication, which is significantly faster than using loops.
lm_gradient_descent <- function(X, y, learning_rate, scale = TRUE){
  if(scale == TRUE){ X <- scale(X) }
  X <- cbind(1, X)
  theta <- rep(0, ncol(X)) # your old temp_arr
  diff <- 1
  old.error <- sum((X %*% theta - y)^2) / (2*length(y))
  while(diff > 0.000000001){
    theta <- theta - learning_rate * t(X) %*% (X %*% theta - y) / length(y)
    new.error <- sum((X %*% theta - y)^2) / (2*length(y))
    diff <- abs(old.error - new.error)
    old.error <- new.error
  }
  return(theta)
}
And to show it works...
> lm_gradient_descent(X, y, .01, 0)
[,1]
[1,] -3.9360685
[2,] -0.9851775
[3,] 2.9736566
vs expected of (-4, -1, 3)
For what it's worth, while I agree with @cfh that I would prefer a loop with a defined number of iterations, I'm actually not sure you need the abs function: if diff < 0 then your function is not converging.
Finally, rather than using something like old.error and new.error, I'd suggest using a vector that records all the errors. You can then plot that vector to see how quickly your function converges.
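A sketch of that error-tracking idea, as a drop-in change to the loop above (errors is a new vector introduced here for illustration):
errors <- numeric(0)   # grows by one entry per iteration
while(diff > 0.000000001){
  theta <- theta - learning_rate * t(X) %*% (X %*% theta - y) / length(y)
  new.error <- sum((X %*% theta - y)^2) / (2*length(y))
  errors <- c(errors, new.error)   # record the error for this iteration
  diff <- abs(old.error - new.error)
  old.error <- new.error
}
plot(errors, type = "l")   # visualise how quickly the descent converges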

Compute multiple integrals and plot them (with R)

I'm having trouble computing and then plotting multiple integrals. It would be great if you could help me.
So I have this function
> f = function(x, mu = 30, s = 12){dnorm(x, mu, s)}
which I want to integrate multiple times, from z (where z is 1:100) to +Inf, so that I can plot x = z against y = auc:
> auc = integrate(f, z, Inf)
R returns:
Warning message:
In if (is.finite(lower)) { :
the condition has length > 1 and only the first element will be used
I have tried a loop:
while(z < 100){
z = 1
auc = integrate(f,z,Inf)
z = z+1}
That doesn't work either... I don't know what to do.
(I'm new to R, so I'm sorry if this is really easy.)
Thanks for your help!
There is no need to do the integration by hand. pnorm gives the integral of the normal density from negative infinity up to its input. You can get the upper tail instead by setting the lower.tail parameter:
z <- 1:100
y <- pnorm(z, mean = 30, sd = 12, lower.tail = FALSE)
plot(z, y)
If you're looking to integrate more complex functions, then using integrate will be necessary; but if you're just looking for probabilities of standard distributions, there will most likely be a built-in function that does the integration for you directly.
Your problem is actually somewhat subtle, and in a certain sense gets to the core of how R works, so here is a slightly longer explanation.
R is a "vectorized" language, which means that just about everything works on vectors. If I have 2 vectors A and B, then A+B is the element-by-element sum of A and B. Nearly all R functions work this way also. If X is a vector, then Y <- exp(X) is also a vector, where each element of Y is the exponential of the corresponding element of X.
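For instance, a tiny illustration of that point:
A <- c(1, 2, 3)
B <- c(10, 20, 30)
A + B          # 11 22 33: element-by-element sum
exp(c(0, 1))   # 1.000000 2.718282: exp() applied to each element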
The function integrate(...) is one of the few functions in R that is not vectorized. So when you write:
f <- function(x, mu = 30, s = 12){dnorm(x, mu, s)}
auc <- integrate(f, z, Inf)
the integrate(...) function does not know what to do with z when it is a vector. So it takes the first element and complains. Hence the warning message.
There is a special function in R, Vectorize(...), that turns scalar functions into vectorized functions. You would use it this way:
f <- function(x, mu = 30, s = 12){dnorm(x, mu, s)}
auc <- Vectorize(function(z) integrate(f,z,Inf)$value)
z <- 1:100
plot(z,auc(z), type="l") # plot lines
