I would like to compute numerical derivatives of a bivariate function that I have defined myself. I need the first derivative with respect to each argument and the cross second derivative. Is there a package or built-in function to do this?
Install and load the numDeriv package:
library(numDeriv)
f <- function(x) {
  a <- x[1]; b <- x[2]; c <- x[3]
  sin(a^2 * (abs(cos(b))^c))
}
grad(f,x=1:3)
## [1] 0.14376097 0.47118519 -0.06301885
hessian(f,x=1:3)
## [,1] [,2] [,3]
## [1,] 0.1422651 0.9374675 -0.12538196
## [2,] 0.9374675 1.8274058 -0.25388515
## [3,] -0.1253820 -0.2538852 0.05496226
(My example is trivariate rather than bivariate, but it will obviously work for a bivariate function as well.) See the help pages for more information on how the gradient and especially Hessian computations are done.
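To map this onto the bivariate case explicitly, here is a minimal sketch (the function g below is an arbitrary example, not from the question): the two first derivatives are the entries of the gradient, and the cross second derivative is the off-diagonal entry of the Hessian.
library(numDeriv)
g <- function(x) x[1]^2 * sin(x[2])  # arbitrary bivariate example
grad(g, c(1, 2))           # c(dg/dx1, dg/dx2)
hessian(g, c(1, 2))[1, 2]  # cross second derivative d2g/(dx1 dx2)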
As you know, the gradient of a function $f$ is the vector of first partial derivatives, $\nabla f = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right)$, and the Hessian is the matrix of second partial derivatives, $H_{ij} = \frac{\partial^2 f}{\partial x_i \, \partial x_j}$.
Now I wonder: is there any way to calculate these in R for a user-defined function at a given point?
First, I found a package named numDeriv, which seems to have the necessary functions grad and hessian, but I can't get the correct results with it. Here's my workflow:
Let's say that we are given the function f(x,y) = x^2 * y^3, and we need to calculate the gradient and the Hessian at the point (x=1, y=2).
That being said, I define this function within R:
dummy <- function(x, y) {
  rez <- (x^2) * (y^3)
  rez
}
and then use grad in the following way:
grad(func=dummy, x=1, y=2)
which gives me the result 16 -- and the problem is that this is only the first component of the gradient vector, the correct version of which is
[16, 12]
Same goes for the hessian:
hessian(func=dummy, x=1, y=2)
which gives me a 1x1 matrix with the value 16 instead of the 2x2 matrix
[,1] [,2]
[1,] 16 24
[2,] 24 12
So, the question is what am I doing wrong?
Thank you.
The immediate problem is that grad and hessian differentiate a function of a single vector argument; in grad(func=dummy, x=1, y=2), the y=2 is passed through to dummy as a fixed extra argument, so only x is differentiated. Rewrite dummy to take one vector argument, and you can use the pracma library, such as:
library(pracma)
dummy <- function(x) {
  z <- x[1]; y <- x[2]
  rez <- (z^2) * (y^3)
  rez
}
grad(dummy, c(1,2))
[1] 16 12
hessian(dummy, c(1,2))
[,1] [,2]
[1,] 16 24
[2,] 24 12
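For the record, the same one-vector-argument convention makes the numDeriv functions from the question work too; a quick check with the rewritten dummy:
numDeriv::grad(dummy, c(1, 2))     # [1] 16 12
numDeriv::hessian(dummy, c(1, 2))  # the same 2x2 matrix as above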
The following code extends the pracma answer to the case where you have sampled values of the function rather than the function itself. Here the function has one parameter. pracma::grad evaluates the gradient at a single point; if your function had three parameters, you would supply the point as x0 = c(x1, x2, x3).
# i is an index; s_carvone$val contains the sampled values of the function
dummy <- function(i) {
  s_carvone$val[i]
}

# calculate the gradient at a single point i
# (heps = 1: the finite-difference step is one index unit, since the
# function is only known at integer indices)
calc_grad <- function(i) {
  pracma::grad(dummy, x0 = i, heps = 1)
}

# calculate the derivative from point 2 to 61
first_derivative <- unlist(purrr::map(2:61, calc_grad))
plot(first_derivative)
I'm trying to compute the -0.5 power of the following matrix:
S <- matrix(c(0.088150041, 0.001017491, 0.001017491, 0.084634294), nrow = 2)
In Matlab, where ^ is a matrix power (unlike R's elementwise ^), the result of S^(-0.5) is:
S^(-0.5)
ans =
    3.3683   -0.0200
   -0.0200    3.4376
One way is via the expm package:
> library(expm)
> solve(sqrtm(S))
            [,1]        [,2]
[1,]  3.36830328 -0.02004191
[2,] -0.02004191  3.43755429
After some time, the following solution came up:
"%^%" <- function(S, power)
with(eigen(S), vectors %*% (values^power * t(vectors)))
S%^%(-0.5)
The result gives the expected answer:
            [,1]        [,2]
[1,]  3.36830328 -0.02004191
[2,] -0.02004191  3.43755430
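One caveat worth adding (my note, not part of the original answer): this eigendecomposition shortcut assumes S is symmetric, so that t(vectors) is also the inverse of vectors, and defining %^% masks the integer matrix-power operator of the same name that expm exports. A quick sanity check:
M <- S %^% (-0.5)
all.equal(M %*% M %*% S, diag(2))  # TRUE: (S^(-1/2))^2 %*% S is the identity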
The square root of a matrix is not necessarily unique (most real numbers have at least two square roots, so this is not peculiar to matrices). There are multiple algorithms for generating a square root of a matrix. Others have shown the approach using expm and eigenvalues; the Cholesky decomposition is another possibility (see the chol function), as sketched below.
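Here is a minimal sketch of the Cholesky route, reusing S from the question. Note that chol() returns an upper-triangular factor R that is a square root only in the factorization sense t(R) %*% R == S; it is not the symmetric square root computed above.
S <- matrix(c(0.088150041, 0.001017491, 0.001017491, 0.084634294), nrow = 2)
R <- chol(S)
all.equal(t(R) %*% R, S)  # TRUE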
To extend this answer beyond square roots, the following function exp.mat() generalizes the Moore-Penrose pseudoinverse and allows one to calculate arbitrary powers of a matrix via a singular value decomposition (SVD). It even works for non-square matrices, although I don't know when one would need that.
exp.mat() function:
# The exp.mat function can calculate the pseudoinverse of a matrix (EXP = -1)
# and other exponents of matrices, such as square roots (EXP = 0.5) or the
# square root of the inverse (EXP = -0.5).
# The function arguments are a matrix (MAT), an exponent (EXP), and a tolerance
# level for non-zero singular values.
exp.mat <- function(MAT, EXP, tol = NULL) {
  MAT <- as.matrix(MAT)
  matdim <- dim(MAT)
  if (is.null(tol)) {
    tol <- min(1e-7, .Machine$double.eps * max(matdim) * max(MAT))
  }
  if (matdim[1] >= matdim[2]) {
    svd1 <- svd(MAT)
    keep <- which(svd1$d > tol)
    res <- t(svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep]))
  }
  if (matdim[1] < matdim[2]) {
    svd1 <- svd(t(MAT))
    keep <- which(svd1$d > tol)
    res <- svd1$u[, keep] %*% diag(svd1$d[keep]^EXP, nrow = length(keep)) %*% t(svd1$v[, keep])
  }
  return(res)
}
Example
S <- matrix(c(0.088150041, 0.001017491, 0.001017491, 0.084634294), nrow = 2)
exp.mat(S, -0.5)
# [,1] [,2]
#[1,] 3.36830328 -0.02004191
#[2,] -0.02004191 3.43755429
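As a quick consistency check (my addition, not part of the original answer), the -0.5 and +0.5 powers should multiply back to the identity:
exp.mat(S, -0.5) %*% exp.mat(S, 0.5)  # recovers the 2x2 identity, up to rounding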
Other examples can be found here.
I would like to solve an equation, given below, where X is the only unknown variable and the function f() is a multivariate Student t density.
More precisely, I have a k-dimensional integral of a Student t density function, which yields a probability, and I know that this probability equals q. The lower bound of every integral is -Inf and the upper bounds of the last k-1 dimensions are given; the only unknown is the upper bound of the first integral. With one equation and one unknown variable, it should have a solution.
I tried to solve it in R. I fitted a Dynamic Conditional Correlation model to obtain a correlation matrix with which to specify my t distribution. I plug this correlation matrix into the multivariate t density dmvt, use the adaptIntegrate function from the cubature package to construct a function, and pass that function to uniroot to solve for the upper bound of the first integral. But I am having difficulty getting this to work. (I hope my question is clear.) My code is below; somebody told me there is a problem with it, but I cannot find the issue. Many thanks in advance for your help.
I know how to deal with a one-dimensional integral, but I don't know how a multi-dimensional integral equation can be solved in R (e.g. for the two-dimensional case):
\int_{-\infty}^{X} \int_{-\infty}^{Y_1} \cdots \int_{-\infty}^{Y_k} f(x, y_1, \ldots, y_k) \, dx \, dy_1 \cdots dy_k = q
This code fails:
require(cubature)
require(mvtnorm)
corr <- matrix(c(1,0.8,0.8,1),2,2)
f <- function(x){ dmvt(x,sigma=corr,df=3) }
g <- function(y) adaptIntegrate(f,
                                lowerLimit = c(-Inf, -Inf),
                                upperLimit = c(y, -0.1023071))$integral - 0.0001
uniroot( g, c(-2, 2))
Since mvtnorm includes a pmvt function that computes the CDF of the multivariate t distribution, you don't need to do the integral by brute force. (mvtnorm also includes a quantile function qmvt, but only for "equicoordinate" values.)
So:
library(mvtnorm)
g <- function(y1_upr, y2_upr = -0.123071, target = 1e-4, df = 3) {
  pmvt(upper = c(y1_upr, y2_upr), df = df) - target
}
uniroot(g,c(-10000,0))
## $root
## [1] -17.55139
##
## $f.root
## [1] -1.699876e-11
## attr(,"error")
## [1] 1e-15
## attr(,"msg")
## [1] "Normal Completion"
##
## $iter
## [1] 18
##
## $estim.prec
## [1] 6.103516e-05
##
Double-check:
pmvt(upper=c(-17.55139,-0.123071),df=3)
## [1] 1e-04
## attr(,"error")
## [1] 1e-15
## attr(,"msg")
## [1] "Normal Completion"
Is there a function that can convert a covariance matrix built using log-returns into a covariance matrix based on simple arithmetic returns?
Motivation: we'd like to use a mean-variance utility function where expected returns and variances are specified in arithmetic terms. However, estimating returns and covariances is often performed with log-returns, because of the additivity property of log returns and the assumption that asset prices follow a lognormal stochastic process.
Meucci describes a process for generating an arithmetic-returns-based covariance matrix for a generic/arbitrary distribution of lognormal returns on Appendix page 5.
Here's my translation of the formulae:
linreturn <- function(mu, Sigma) {
  m <- exp(mu + diag(Sigma)/2) - 1
  x1 <- outer(mu, mu, "+")
  x2 <- outer(diag(Sigma), diag(Sigma), "+")/2
  S <- exp(x1 + x2) * (exp(Sigma) - 1)
  list(mean = m, vcov = S)
}
edit: fixed -1 issue based on comments.
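For reference, the standard lognormal moment identities that the code implements, for arithmetic returns r = e^z - 1 with z \sim N(\mu, \Sigma), are:
E[r_i] = \exp(\mu_i + \Sigma_{ii}/2) - 1
\mathrm{Cov}(r_i, r_j) = \exp\left(\mu_i + \mu_j + \frac{\Sigma_{ii} + \Sigma_{jj}}{2}\right)\left(e^{\Sigma_{ij}} - 1\right)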
Try an example:
m1 <- c(1,2)
S1 <- matrix(c(1,0.2,0.2,1),nrow=2)
Generate multivariate log-normal returns:
set.seed(1001)
r1 <- exp(MASS::mvrnorm(200000,mu=m1,Sigma=S1))-1
colMeans(r1)
## [1] 3.485976 11.214211
var(r1)
## [,1] [,2]
## [1,] 34.4021 12.4062
## [2,] 12.4062 263.7382
Compare with expected results from formulae:
linreturn(m1,S1)
## $mean
## [1] 3.481689 11.182494
## $vcov
## [,1] [,2]
## [1,] 34.51261 12.08818
## [2,] 12.08818 255.01563
I am trying to write a function that uses Newton's method (coefficients + (inverse Hessian) * gradient) to iteratively find the coefficients of a log-linear model.
I am using the following code:
##reading in the data
dat<-read.csv('hw8.csv')
summary(dat)
# data file containing yi and xi
attach(dat)
## Create the design matrix: a column of 1s and the xi values
x <- cbind(1, xi)
mle <- function(c) {
  gi <- 1 - yi * exp(c[1] + c[2] * xi)
  hi <- gi - 1
  H <- -1 * (t(x) %*% hi %*% x)
  g <- t(x) %*% gi
  c <- c + solve(H) %*% g
  return(c)
}
optim(c(0,1),mle,hessian=TRUE)
When I run the code, I get the following error:
Error in t(x) %*% hi %*% x : non-conformable arguments
RMate stopped at line 29
Given that the formula is drawn from Bill Greene's problem set, I don't think it is a formula problem. I think I am doing something wrong in passing my function.
How can I fix this?
Any help with this function would be much appreciated.
As Jonathan said in the comments, you need proper dimensions:
R> X <- matrix(1:4, ncol=2)
R> t(X) %*% X
[,1] [,2]
[1,] 5 11
[2,] 11 25
R>
But you should also use the proper tools: look at the loglin function in the stats package and/or the loglm function in the MASS package. Both are installed by default with your R installation.
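To make the dimension point concrete for the code in the question, here is a minimal sketch of the specific fix, under my reading (an assumption, not confirmed by the question) that the intended Hessian is H = -X' diag(h_i) X: hi is an n-vector, so it should scale the rows of x elementwise rather than sit in a matrix product between t(x) and x. Note also that optim expects its fn argument to return a scalar objective value, not an updated coefficient vector, so a Newton iteration would need to be written as an explicit loop instead.
# Sketch, assuming H = -X' diag(h) X: c(hi) scales the rows of x,
# giving a 2x2 Hessian instead of the non-conformable (2xn) %*% (nx1) %*% (nx2)
H <- -t(x) %*% (c(hi) * x)
g <- t(x) %*% gi
step <- solve(H, g)  # Newton step; solve(H, g) avoids explicitly forming the inverse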