I have a small MATLAB script, mainly computing derivatives with the Symbolic Math Toolbox, that I want to rewrite in R. I chose the Ryacas package because I found rSymPy too tricky to install... Here is my R code:
# install.packages('Ryacas')
library(Ryacas)
z <- Sym("z")
psi=c()
psi[1]=z^2*exp(-z)/(1-exp(-z))
psi[2]=z^2*exp(-z)/(1-exp(-z))*log(z)
psi[3]=z^2*exp(-z)/(1-exp(-z))*log(z)^2
f=matrix(NA,4,4)
f[1,1]=z^2*exp(-z)/(1-exp(-z))
for(i in 2:4){
f[i,1]=deriv.Sym(psi[i-1],z)
j=2
while(j<=i){
f[i,j]=deriv.Sym(expression(f[i,j-1]/f[j-1,j-1]),z)
j=j+1
}
}
It does not report any error. However, the output shows that R isn't actually doing symbolic computation but is returning character strings, so I cannot evaluate the result. I tried:
> i=2
> deriv.Sym(psi[i-1],z)
expression(((1 - exp(-z)) * (2 * (z * exp(-z)) - z^2 * exp(-z)) -
z^2 * exp(-z)^2)/(1 - exp(-z))^2)
> f[i,1]
[1] "( D( z , 1 ) ( ( ( z ^ 2 ) * ( Exp ( ( - z ) ) ) ) / ( 1 - ( Exp ( ( - z ) ) ) ) ) )"
It seems that deriv.Sym(psi[i-1], z) does the symbolic derivative and gets the correct result, but if the result is assigned to a variable it becomes character class. I am confused about expression(), yacas(), Sym(), and character. Can anyone point out my mistake or help me clarify these concepts? Thank you so much.
Below is the corresponding MATLAB code for reference; it works just fine.
syms c;
psi(1)=c^2*exp(-c)/(1-exp(-c));
psi(2)=c^2*exp(-c)/(1-exp(-c))*log(c);
psi(3)=c^2*exp(-c)/(1-exp(-c))*log(c)^2;
f(1,1)=c^2*exp(-c)/(1-exp(-c));
for i=2:4
f(i,1)=diff(psi(i-1),c);
j=2;
while j<=i
f(i,j)=diff(f(i,j-1)/f(j-1,j-1),c);
j=j+1;
end
end
g11=matlabFunction(f(1,1));
fplot(g11,[0,10])
figure
g22=matlabFunction(f(2,2));
fplot(g22,[0,10])
figure
g33=matlabFunction(f(3,3));
fplot(g33,[0,10])
figure
g44=matlabFunction(f(4,4));
fplot(g44,[0,10])
There are several problems with the R code in the question:
it is attempting to assign an S3 object to elements of a logical matrix:
typeof(NA)
## [1] "logical"
so R has converted the Sym object to character (Sym objects are internally character), which is as far as the coercion can go. f needs to be defined as a list with two dimensions so that it can hold such objects:
f <- matrix(list(), 4, 4)
since f is a list with two dimensions, all references to elements of f should use double square brackets, as in:
f[[1, 1]] <- z^2 * exp(-z) / (1 - exp(-z))
similarly psi should be initialized as:
psi <- list()
and then referenced as:
psi[[1]] <- z^2 * exp(-z) / (1 - exp(-z))
to evaluate f[[i, 1]] use Eval:
Eval(f[[i, 1]], list(z = 1))
## [1] 0.2432798
This also works but overwrites the Sym object z:
z <- 1
Eval(f[[i, 1]])
in general, code should call the generic deriv rather than going directly to the specific method deriv.Sym
The revised code in the section at the end makes these changes as well as some stylistic improvements.
I suggest you review the vignette that comes with Ryacas. From the R console, enter:
vignette("Ryacas")
Also review the Ryacas demos:
demo(package = "Ryacas")
Revised code
# install.packages('Ryacas')
library(Ryacas)
z <- Sym("z")
psi <- list()
psi[[1]] <- z^2 * exp(-z) / (1 - exp(-z))
psi[[2]] <- z^2 * exp(-z) / (1 - exp(-z)) * log(z)
psi[[3]] <- z^2 * exp(-z) / (1 - exp(-z)) * log(z)^2
f <- matrix(list(), 4, 4)
f[[1,1]] <- z^2 * exp(-z) / (1 - exp(-z))
for(i in 2:4) {
f[[i, 1]] <- deriv(psi[[i-1]], z)
j <- 2
while(j <= i) {
f[[i, j]] <- deriv(f[[i, j-1]] / f[[j-1, j-1]], z)
j <- j + 1
}
}
i <- 2
deriv(psi[[i-1]], z)
f[[i, 1]]
Eval(f[[i, 1]], list(z = 1))
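To mirror the MATLAB matlabFunction/fplot calls, one rough sketch (my own addition, assuming the scalar Eval() call shown above; sapply is used so Eval itself does not need to be vectorised) is:
zz <- seq(0.01, 10, length.out = 200)   # start just above 0 to avoid 0/0 at z = 0
vals <- sapply(zz, function(zv) Eval(f[[1, 1]], list(z = zv)))
plot(zz, vals, type = "l", xlab = "z", ylab = "f[[1, 1]]")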
Summary of problem
My objective is to create a function called newton.raphson to implement the Newton-Raphson root-finding algorithm.
Root-finding update: x1 = x0 - f(x0) / f'(x0)
I have 2 arguments:
iter = number of iterations (value = 10^5)
epsilon = the tolerance (value = 10^-10)
The function cannot depend on variables outside of it.
newton.raphson <- function(f, x0, iter=1e5, epsilon=1e-10) {
x <- x0
h <- 1e-5
for (t in 1:iter) {
drvt <- f((x+h)) - f((x-h)) / (2 * h)
update <- x - f(x)/ drvt
if (abs(update) < epsilon) {
break
}
x <- update
}
root <- x
return(root)
}
# Define some function to test
f <- function(x) {
x^2 - 4 * x - 7
}
I get the following results:
> newton.raphson(f, 0)
[1] 2.000045
> newton.raphson(f, 3)
[1] 5.000024
But results should be:
-1.316625
5.316625
Your derivative calculation is a little bit broken: you forgot parentheses around the difference between f(x+h) and f(x-h):
drvt <- ( f(x+h) - f(x-h) ) / (2 * h)
Also, you should compare the difference between the old and new root approximations to the tolerance. To make things clearer, rename your misleading update variable to something like new.x. Then you should check if (abs(new.x - x) < epsilon).
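Putting both fixes together, a corrected version could look like this (a sketch of one way to arrange it, keeping the rest of your function unchanged):
newton.raphson <- function(f, x0, iter = 1e5, epsilon = 1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- (f(x + h) - f(x - h)) / (2 * h)  # central difference, parenthesised
    new.x <- x - f(x) / drvt
    if (abs(new.x - x) < epsilon) {          # stop when successive iterates agree
      x <- new.x
      break
    }
    x <- new.x
  }
  x
}
f <- function(x) x^2 - 4 * x - 7
newton.raphson(f, 0)  # approximately -1.316625
newton.raphson(f, 3)  # approximately 5.316625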
Basically, I have
x <- rnorm(5)
I want to write a function which takes integers 1 <= n <= 5 and returns, in a vector, the values of the series (1/n) * ( 1 / sum_{i=1}^n x_i ),
so
n=1 -> 1 * 1/x_1
n=2 -> (1/2) * ( 1/(x_1 + x_2) )
n=3 -> (1/3) * ( 1/(x_1 + x_2 + x_3) )
n=4 -> (1/4) * ( 1/(x_1 + x_2 + x_3 + x_4) )
I wrote this function:
series <- function(n){
n=seq(1,5,1)
x<-rnorm(length(n))
print(x)
return ( (1/n)* (1/sum(x[1:length(x[n])])) )
}
But the result is not correct. For example:
> series(5)
[1] 1.17810059 0.85472777 -0.55077392 -0.03856963 -0.19404827
[1] 0.8003608 0.4001804 0.2667869 0.2000902 0.1600722
for n = 2 -> (1/2) * ( 1/(x_1 + x_2) ) = (1/2) * ( 1/(1.17810059 + 0.85472777) ), but unfortunately the result according to my code is 0.4001804!
P.S.: I want to write the code without loops and without any function that requires loading a library! Just a simple function using basic R functions, so that I can then save the result, if needed, using Vectorize() or outer().
The sum() function is not vectorized in the way you want: it collapses everything down to a single value. Instead, you can use cumsum() to get the cumulative sum of all the values in the vector thus far.
series <- function(n){
n <- seq(1,5,1)
x <- rnorm(length(n))
print(x)
return((1/n)* (1/cumsum(x)))
}
Building on the basic idea from #MrFlick, you can also do:
1/seq_along(x) * 1/cumsum(x)
[1] -1.7841988 -0.6323886 0.4339966 0.2981289 0.2066433
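As a quick sanity check, here is a small made-up example (the vector c(2, 2, 4) is mine, not from the original post) where the values are easy to verify by hand:
x <- c(2, 2, 4)
1/seq_along(x) * 1/cumsum(x)
# n = 1: 1     * 1/2           = 0.5
# n = 2: (1/2) * 1/(2 + 2)     = 0.125
# n = 3: (1/3) * 1/(2 + 2 + 4) = 0.04166667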
Fellow programmers, I'm studying a book on numerical solutions for economics (Judd 1998). I'm trying to reproduce a problem from that same book in R so I can use the optim function to see if I can get similar results.
The problem established by the author, and the results he obtained, were shown as images in the original post.
I have tried to transcribe this problem to R, which resulted in this code chunk:
DisutilityJudd <- function(L){
if(L == 0){
return(0)
}else{
return(0.1)
}
}
AgentUtilityJudd <- function(w, L){
(-exp(-2*w) + 1) - DisutilityJudd(L)
}
reservation.utility.judd <- AgentUtilityJudd(1, 1)
MaxEffortUtility <- function(w1, w2, L = 1){
0.8 * AgentUtilityJudd(w1, L) + 0.2 * AgentUtilityJudd(w2, L)
}
LeastEffortUtility <- function(w1, w2, L = 0){
0.4 * AgentUtilityJudd(w1, L) + 0.6 * AgentUtilityJudd(w2, L)
}
UtilityDifferenceJudd <- function(w1, w2){
MaxEffortUtility(w1, w2) - LeastEffortUtility(w1, w2)
}
PenaltyFunctionJudd <- function(w1, w2, P = 100000){
if(length(w1) == 2){
y <- -1 * (0.8 * (2 - w1[1]) - 0.2 * w1[2] - P *
(pmax(0, -MaxEffortUtility(w1[1], w1[1]) - reservation.utility.judd))^2 -
P * (pmax(0, -UtilityDifferenceJudd(w1[1], w1[1])))^2)
}else{
y <- -1 * (0.8 * (2 - w1) - 0.2 * w2 - P *
(pmax(0, -MaxEffortUtility(w1, w2) - reservation.utility.judd))^2 -
P * (pmax(0, -UtilityDifferenceJudd(w1, w2)))^2)
}
return(y)
}
There were no errors, but the results generated by my code were nowhere near what I was expecting:
optim(c(1.1, 0.5), PenaltyFunctionJudd)
$par
[1] 1.343909e+49 -2.370681e+51
$value
[1] -4.633849e+50
$counts
function gradient
501 NA
$convergence
[1] 1
$message
NULL
Perhaps there is a problem with my penalty function; I'm assuming it is due to the pmax function. Could somebody help me identify it? Thank you, I appreciate your attention.
Edit: a typo.
I believe you meant w1[2] (not w1[1]) as the second argument in the branch where if(length(w1) == 2) is true.
I have modified your code without touching how you defined the previous functions. It is not clear whether this is the expected result: what does IV(-1) mean? Is it the result minus 1? A power of 10?
PenaltyFunctionJudd <- function(w1, w2, P = 1e5){
if(length(w1) > 1){
w2 <- w1[2]
w1 <- w1[1]
}
# cat("length is 2 \n")
y <- 0.8 * (2 - w1) - 0.2 * w2 - P *
( pmax(0, -MaxEffortUtility(w1, w2) - reservation.utility.judd) )^2 -
P * ( pmax(0, -UtilityDifferenceJudd(w1, w2)) )^2
# cat("pmax1 :", pmax(0, -MaxEffortUtility(w1, w2) - reservation.utility.judd), "\n")
# cat("pmax2 :", pmax(0, -UtilityDifferenceJudd(w1, w2)), "\n")
return(y)
}
optim(c(1.1, 0.5), PenaltyFunctionJudd, control = list(fnscale = -1) )
optim(c(11, 5), PenaltyFunctionJudd, method = "BFGS", control = list(fnscale = -1, maxit = 100) )
You can use cat or print to check your values (here I noticed some Inf and 0 values that led me to spot the code error).
Friendly warning: provided you defined the previous functions correctly, there is a lot of instability in the optimisation (problem badly posed? More penalty needed?). Indeed, when running the algorithm two or more times, the parameters fluctuate a lot...
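As a rough sketch of the kind of check meant above (reusing the functions already defined in the question; the call mirrors the first optim() line above), you can inspect the two penalised quantities at the returned parameters:
res <- optim(c(1.1, 0.5), PenaltyFunctionJudd, control = list(fnscale = -1))
w <- res$par
# the two quantities inside the pmax() penalties; both should be ~0 if the
# constraints are satisfied at the optimum
pmax(0, -MaxEffortUtility(w[1], w[2]) - reservation.utility.judd)
pmax(0, -UtilityDifferenceJudd(w[1], w[2]))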
I have two equations. They are as follows:
( 1 - 0.25 ^ {1/alpha} ) * lambda = 85
( 1 - 0.75 ^ {1/alpha} ) * lambda = 11
I would like to compute the values of alpha and lambda by solving the above two equations. How do I do this using R?
One approach is to translate it into an optimization problem by introducing a loss function:
loss <- function(X) {
L = X[1]
a = X[2]
return(sum(c(
(1 - 0.25^(1/a))*L - 85,
(1 - 0.75^(1/a))*L - 11
)^2))
}
nlm(loss, c(-1,-1))
If the result returned from nlm() has a minimum near zero, then estimate will be a vector containing lambda and alpha. When I tried this, I got an answer that passed the sniff test:
> a = -1.28799
> L = -43.95321
> (1 - 0.25^(1/a))*L
[1] 84.99999
> (1 - 0.75^(1/a))*L
[1] 11.00005
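For completeness, a small sketch of pulling the fitted values straight out of the nlm() result (component names as documented for nlm; the order matches X[1] = lambda, X[2] = alpha inside loss()):
fit <- nlm(loss, c(-1, -1))
fit$minimum            # should be near zero if both equations are satisfied
L <- fit$estimate[1]   # lambda
a <- fit$estimate[2]   # alpha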
#olooney's answer is best.
Another way to solve these equations is to use the uniroot function. We can cancel out lambda (by dividing one equation by the other) and use uniroot to find the value of alpha, then substitute back to find lambda.
f <- function(x) {
(11/85) - ((1 - (0.75) ^ (1/x)) / (1 - (0.25) ^ (1/x)) )
}
f_alpha <- uniroot(f, lower = -10, upper = -1, extendInt = "yes")
f_lambda <- function(x) {
11 - ((1 - (0.75) ^ (1/f_alpha$root)) * x)
}
lambda = uniroot(f_lambda, lower = -10, upper = -2, extendInt = "yes")$root
sprintf("Alpha equals %f", f_alpha$root)
sprintf("Lambda equals %f", lambda)
results in
[1] "Alpha equals -1.287978"
[1] "Lambda equals -43.952544"
I'm writing a function for Gaussian mixture models with spherical covariance structures, i.e. $\Sigma_k = \sigma_k^2 I$. This particular function is similar to the mclust package's model with identifier VII.
http://en.wikipedia.org/wiki/Mixture_model
Anyway, the problem I'm having is that I'm running into infinite values for the weight matrix. Definition: let W be an n x m matrix with rows i = 1, ..., n (number of observations) and columns m = 1, ..., m (number of mixtures). Each element of W (i.e. w_im) can essentially be defined as a specific form of:
w_{im} = \frac{a/b \cdot \exp(c)}{\sum_{i=1}^{m} a_i / b_i \cdot \exp(c_i)}
Computing this numerically is giving me infinite values. So I'm trying to use the log identity log(x+y) = log(x) + log(1 + y/x). But the issue is that it's not as simple as log(x+y) but rather log(\sum_{i=1}^m [a_i / b_i * exp(c_i)]).
Here's some code. Define:
n_im = a / b * exp(c);
d_.m = \sum_{i=1}^m [a_i / b_i * exp(c_i)]; and
c_mat[i,j] as the value of the exponent for the [i,j]th term.
n_mat[, i] <- log(a[i]) - log(b[i]) - c[,i] # numerator of w_im
internal_vec1[i] <- (a[i] * b[1])/ (a[1] * b[i]) # an internal for the step below
c_mat2 <- cbind(rep(1, n), c_mat[,1] - c_mat[,-1]) # since e^a / e^b = e^(a-b)
for (i in 1:n) {
d_vec[i] <- n_mat[i, 1] + log(sum(internal_vec1 * exp(c_mat2[i, ])))
} ## still getting infinite values
I'm trying to define the problem as briefly as possible; the entire function is obviously much larger than this. But since the problem I'm running into specifically concerns infinite (and 1/infinity) values, I'm hoping this snippet is sufficient. Does anyone have a coding trick here?
Here is the solution!! (I've spent way too damn long on this)
The first function, log_plus(), solves the simple problem where you want log(\sum_{i=1}^n x_i).
The second function, log_plus2(), solves the more complicated problem described above, where you want log(\sum_{i=1}^n [a_i / b_i * exp(c_i)]).
log_plus <- function(xvec) {
  m <- length(xvec)
  # start from the log of the first term
  x <- log(xvec[1])
  for (j in 2:m) {
    sum_j <- sum(xvec[1:(j - 1)])        # sum of the first j-1 terms
    x <- x + log(1 + xvec[j] / sum_j)    # log(S + x_j) = log(S) + log(1 + x_j/S)
  }
  return(x)
}
log_plus2 <- function(a, b, c) {
  # assumes intended input of form sum(a/b * e^c)
  if ((length(a) != length(b)) || (length(a) != length(c))) {
    stop("Input equal length vectors")
  }
  if (!(all(c > 0) || all(c < 0))) {
    stop("All values of c must be either > 0 or < 0.")
  }
  m <- length(a)
  # initialize the log sum with the first term: log(a_1 / b_1 * exp(c_1))
  x <- log(a[1]) - log(b[1]) + c[1]
  # aggregate the log sum, adding one term at a time
  for (j in 2:m) {
    # shift the exponents (by the min for all-positive c, the max for
    # all-negative c) so that exp() stays finite
    c2 <- c[1:(j - 1)]
    c_min <- if (all(c2 > 0)) min(c2) else max(c2)
    c2 <- c2 - c_min
    # build denominator: sum_{i<j} a_i * prod(b2[-i]) * exp(c_i - c_min)
    b2 <- b[1:(j - 1)]
    d1 <- 0
    for (i in 1:(j - 1)) {
      d1 <- d1 + a[i] * prod(b2[-i]) * exp(c2[i])
    }
    den <- b[j] * d1
    # numerator: a_j * prod(b_1 ... b_{j-1}) * exp(c_j - c_min)
    num <- a[j] * prod(b[1:(j - 1)]) * exp(c[j] - c_min)
    # log(S + t_j) = log(S) + log(1 + t_j / S), with t_j / S = num / den
    x <- x + log(1 + num / den)
  }
  return(x)
}
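For comparison, a more compact route to the same quantity (this is the standard max-shift log-sum-exp trick rather than the incremental scheme above) works directly on the log-scale terms log(a_i) - log(b_i) + c_i:
log_sum_exp <- function(l) {
  # log(sum(exp(l))) computed stably by shifting by the largest term
  m <- max(l)
  m + log(sum(exp(l - m)))
}
log_plus3 <- function(a, b, c) {
  # log( sum_i a_i / b_i * exp(c_i) ) via the log-scale terms
  log_sum_exp(log(a) - log(b) + c)
}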