I want to minimize the function
f <- function(u){
  return(-(1 + u[1] + u[2] + u[3] + u[4]))
}
with gradient grad
And I have constraints:
1) u[1]+u[2]+u[3]+u[4] = 1
2) 0<=u[1]<=1, 0<=u[2]<=1, 0<=u[3]<=1, 0<=u[4]<=1
How do I do this correctly? I can only handle constraint 2:
optim(par = c(0, 0, 0, 0), fn = f, lower = c(0, 0, 0, 0), upper = c(1, 1, 1, 1), method = "L-BFGS-B")
But constraint 1 does not hold in that case; the quick check below illustrates the problem.
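For illustration (res_box is just a throwaway name), the box-constrained call simply pushes every u[i] to its upper bound:
res_box <- optim(par = c(0, 0, 0, 0), fn = f,
                 lower = c(0, 0, 0, 0), upper = c(1, 1, 1, 1),
                 method = "L-BFGS-B")
sum(res_box$par)  # roughly 4, not 1, so the unit-sum constraint is ignored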
You could try fmincon from the pracma package, as shown below:
pracma::fmincon(c(0, 0, 0, 0),
                f,
                gr  = grad,
                Aeq = cbind(1, 1, 1, 1),
                beq = 1,
                lb  = c(0, 0, 0, 0),
                ub  = c(1, 1, 1, 1))
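To sanity-check the result (a sketch; for this f the gradient is just a constant vector, so grad can be defined as below if it is not defined already):
grad <- function(u) c(-1, -1, -1, -1)   # gradient of -(1 + u[1] + u[2] + u[3] + u[4])
sol <- pracma::fmincon(c(0, 0, 0, 0), f, gr = grad,
                       Aeq = cbind(1, 1, 1, 1), beq = 1,
                       lb = c(0, 0, 0, 0), ub = c(1, 1, 1, 1))
sol   # the solution component of the returned list should now sum to 1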
I am trying to estimate a state-space model to obtain potential output (y_p) from data on output (y) and the unemployment rate (u) using R. The model is already programmed in EViews and I simply want to reproduce its results. The model is described by the following equations (with time indices):
signal equations:
(i) y_t = y_p_t + eps_y_t
(ii) u_t = beta_0 + beta_1*(y_t - y_p_t) + eps_u_t
state equations:
(iii) y_p_t = y_p_(t-1) + g_(t-1)
(iv) g_t = g_(t-1) + eps_g_t
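For orientation only, here is one way to write (i)-(iv) in generic state-space matrices, with state x_t = (y_p_t, g_t)'; the names Tt and Zt and the value used for beta_1 are purely illustrative and not tied to any particular package:
beta_1 <- -0.2                      # illustrative value only
# Transition (iii)-(iv): x_t = Tt %*% x_(t-1) + (0, eps_g_t)'
Tt <- matrix(c(1, 1,
               0, 1), nrow = 2, byrow = TRUE)
# Measurement (i)-(ii): rewrite (ii) as u_t = (beta_0 + beta_1*y_t) - beta_1*y_p_t + eps_u_t,
# so the "intercept" of the second signal is time-varying because it contains the observed y_t
Zt <- matrix(c(      1, 0,
               -beta_1, 0), nrow = 2, byrow = TRUE)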
I have tried different packages, but there are different problems: either no intercepts are allowed (dlm package) or there is no smoother function (FKF package). So I have two questions; answering either of them would solve my problem. The first (Questions 1a and 1b) relates to the specification of an appropriate state-space model in the dlm package; the second (Question 2) relates to a smoothing function that could be used with the FKF package.
Question 1a. The dlm package does not allow intercepts. So I put beta_0 and the output gap (gap_t = y_t - y_p_t) into the state vector, using the JGG matrix to reference the y_t data, and tried to estimate beta_1 subsequently via maximum likelihood. However, I did not obtain reasonable results.
library(dlm)

# States: x(1) y_pot, x(2) growth, x(3) y_gap, x(4) beta_0
# Signal: y(1) y,     y(2) u
beta_1 <- -0.2
beta_0 <- 0.03

# Measurement (matrix() fills column-wise, so row 1 is (1, 0, 0, 0) and
# row 2 is (0, 0, beta_1, 1): y = y_pot and u = beta_1*y_gap + beta_0)
FF <- matrix(c(1, 0, 0, 0,
               0, beta_1, 0, 1), 2, 4)

# Transition (also filled column-wise)
GG <- matrix(c(1, 0, -1, 0,
               1, 1, -1, 0,
               0, 0,  1, 0,
               0, 0,  0, beta_0), 4, 4)

# JGG[3, 3] = 1 makes GG[3, 3] time-varying, taken from column 1 of X
# (here dataMLE, presumably the y_t series)
JGG <- matrix(c(0, 0, 0, 0,
                0, 0, 0, 0,
                0, 0, 1, 0,
                0, 0, 0, 0), 4, 4)

# Covariance of the transition noise
W <- diag(1e-2, 4)
# Covariance of the measurement noise
V <- matrix(c(1e-2, 0,
              0, 1e-2), 2, 2)

m0 <- c(11.4, 0.04, 0, 0.03)
C0 <- diag(1, 4)  # 1e-7
C0[3, 3] <- 0.1
C0[4, 4] <- 0.1

# Now bring them into the dlm object
# (dataMLE, a matrix holding the y and u series, is defined elsewhere)
myMod <- dlm(FF = FF,
             GG = GG,
             JGG = JGG,
             X = dataMLE,
             W = W,
             V = V,
             m0 = m0,
             C0 = C0)

buildFun <- function(theta) {
  # lambda_ss (a fixed signal-to-noise ratio) is defined elsewhere and not shown here
  V(myMod)[1, 1] <- lambda_ss * exp(theta[1])
  V(myMod)[2, 2] <- exp(theta[2])
  W(myMod)[2, 2] <- exp(theta[1])
  FF(myMod)[2, 3] <- theta[3]
  return(myMod)
}

myMod.mle <- dlmMLE(y = dataMLE, parm = c(-10, -10, -.2),
                    build = buildFun,
                    lower = c(rep(-1e6, 3)),
                    upper = c(rep(1e6, 3)),
                    control = list(trace = 1, REPORT = 5, maxit = 1000))
Question 1b. I have also tried to use the state vector x(1) y_pot, x(2) growth, x(3) beta_1, x(4) beta_0, and to use JFF to get the y_t data for the output-gap calculation... but this approach was not successful either.
Question 1: Do you know of a way in which this rather simple model could be implemented within the dlm package? The problems are the intercepts on the one hand and, on the other, the interaction of the beta_1 estimation with the output gap, which itself consists of one state variable and one external signal.
A more promising approach seemed to be to use the FKF-package. However, no smoother function is provided within this package.
Question 2: Is there a way to obtain the smoothed output instead of the Kalman-filtered output using the FKF package?
I deeply appreciate any help with this problem!
Thanks a lot!
Samuel
I'm using the nloptr package and everything works well. But I need a quicker way to define the objective function and the constraints; I can't write all the settings by hand each time.
For example, I want to solve this problem:
library(nloptr)
eval_f <- function(x){
  return(x[4]^2 + x[7]^2 + x[9]^2)
}
x0 <- c(1, 1, 1, 1, 0.5, 0, 0.5, 1, 0)
hin <- function(x){
  h <- numeric(6)
  h[1] <- x[1] + x[4] - x[2] - x[5] - 0.01
  h[2] <- x[1] + x[4] - x[3] - x[6] - 0.01
  h[3] <- x[2] + x[5] - x[3] - x[6] - 0.01
  h[4] <- x[2] + x[8] - x[1] - x[7] - 0.01
  h[5] <- x[2] + x[8] - x[3] - x[9] - 0.01
  h[6] <- x[1] + x[7] - x[3] - x[9] - 0.01
  return(h)
}
heq <- function(x){
  h <- numeric(1)
  h[1] <- x[1] + x[2] + x[3] - 3
  return(h)
}
res <- slsqp(x0 = x0, fn = eval_f, hin = hin, heq = heq)
Everything works.
But I want to define the objective function more quickly. Can I pass another argument (the indices) to the function in an automatic way? For example:
eval_f <- function(x, indices){
  return(x[indices]^2)
}
I tried this, but I get an error.
The ... argument to slsqp allows you to pass arbitrary arguments through to the objective function. So define a new objective function that takes indices as an argument:
eval_f2 <- function(x, indices){
  return(sum(x[indices]^2))
}
... and include indices=c(4,7,9) (to match your previous objective function's definition):
res2 <- slsqp(x0 = x0, fn = eval_f2, hin = hin, heq = heq, indices = c(4, 7, 9))
Check the solution:
all.equal(res$par,res2$par) ## TRUE
factories
More generally, you can define a factory - a function that returns a function. This works because functions have associated environments in which variables (such as the indices) can be stored. This will work even in cases where the top-level function doesn't allow arbitrary arguments to be passed through (and may e.g. be important if you want to use different sets of indices for your objective and constraint functions ...)
eval_factory <- function(indices) {
  fun <- function(x) {
    return(sum(x[indices]^2))
  }
  return(fun)
}
res3 <- slsqp(x0=x0, fn=eval_factory(indices=c(4,7,9)),
hin = hin,heq = heq)
all.equal(res$par,res3$par) ## TRUE
factory for hin
hin_factory <- function(A, b) {
  fun <- function(x) {
    return((A %*% x) + b)
  }
  return(fun)
}
A0 <- matrix(c(1, -1, 0, 1,-1, 0, 0, 0, 0,
1, 0, -1, 1, 0, -1, 0, 0, 0,
0, 1, -1, 0, 1, -1, 0, 0, 0,
-1, 1, 0, 0, 0, 0,-1, 1, 0,
0, 1, -1, 0, 0, 0, 0, 1, -1,
1, 0, -1, 0, 0, 0, 1, 0, -1),
byrow=TRUE,ncol=9)
all.equal(c(hin_factory(A0,-0.01)(x0)),hin(x0))
res4 <- slsqp(x0=x0, fn=eval_factory(indices=c(4,7,9)),
hin = hin_factory(A0,b=-0.01), heq = heq)
all.equal(res$par, res4$par)
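The same factory idea extends to the equality constraint. A minimal sketch (heq_factory and Aeq0 are new names introduced here; the matrix and offset just reproduce the heq above):
heq_factory <- function(A, b) {
  fun <- function(x) {
    return(drop(A %*% x + b))
  }
  return(fun)
}
Aeq0 <- matrix(c(1, 1, 1, 0, 0, 0, 0, 0, 0), nrow = 1)
all.equal(c(heq_factory(Aeq0, b = -3)(x0)), heq(x0))  ## TRUE
res5 <- slsqp(x0 = x0, fn = eval_factory(indices = c(4, 7, 9)),
              hin = hin_factory(A0, b = -0.01),
              heq = heq_factory(Aeq0, b = -3))
all.equal(res$par, res5$par)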
I'm trying to have R substitute c(1/2, 1, sqrt(2)/2) for the rscale = argument using sapply(). But I'm wondering why I'm getting 3 identical answers (I should get 3 different answers)?
ttype = 1
t = -.742
N1 = 102
N2 = ifelse(ttype==1, NA, 102)
rscale = sqrt(2)/2
tl = 1
dexp = -1
library(BayesFactor)
Gi1 <- ttest.tstat(t, N1, ifelse(ttype==1, F, N2),
                   nullInterval = c(ifelse(dexp==-1, -Inf, Inf), ifelse(tl==1, 0, Inf)),
                   rscale = rscale, simple = TRUE)
UrUr <- sapply(c(1/2, 1, sqrt(2)/2), function(rscale) Gi1)  ## HERE I get 3 same answers!
As @HubertL said, Gi1 is a number, not a function. You need to write a function that takes a parameter and calculates ttest.tstat with it, plugging the new value into the rscale argument. For example,
library(BayesFactor)
Gi1 <- function(x) {
  ttest.tstat(t, N1, ifelse(ttype==1, F, N2),
              nullInterval = c(ifelse(dexp==-1, -Inf, Inf),
                               ifelse(tl==1, 0, Inf)),
              rscale = x, simple = TRUE)
}
UrUr <- sapply(c(1/2, 1, sqrt(2)/2 ), Gi1)
UrUr
And you should get three different answers.
Agreed with the previous answers. You can also try using sapply like this:
sapply(c(1/2, 1, sqrt(2)/2),
       function(x) ttest.tstat(t, N1, ifelse(ttype==1, F, N2),
                               nullInterval = c(ifelse(dexp==-1, -Inf, Inf),
                                                ifelse(tl==1, 0, Inf)),
                               rscale = x, simple = TRUE))
sapply will then cycle through your vector, using the parameter "x" as a placeholder for each element of it.
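A toy illustration of the same pattern, with nothing BayesFactor-specific in it:
sapply(c(1/2, 1, sqrt(2)/2), function(x) x^2)
## [1] 0.25 1.00 0.50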
I was wondering how one extracts the estimated parameters stored in an R object of class mle (from the stats4 package).
Here is an example:
x <- matrix(rnorm(300), ncol = 3)
x[x > 1] <- 1
require(tmvtnorm)
fit1 <- mle.tmvnorm(X = x, lower = rep(-Inf, 3), upper = rep(1, 3))
Now, fit1 is an object of class:
class(fit1)
[1] "mle"
attr(,"package")
[1] "stats4
"
fit1 itself gives me:
fit1
Call:
mle(minuslogl = function (mu_1 = 0, mu_2 = 0, mu_3 = 0, sigma_1.1 = 1,
    sigma_1.2 = 0, sigma_1.3 = 0, sigma_2.2 = 1, sigma_2.3 = 0,
    sigma_3.3 = 1)
{
    nf <- names(formals())
    theta <- sapply(nf, function(x) {
        eval(parse(text = x))
    })
    mean <- theta[1:n]
    if (cholesky) {
        L <- inv_vech(theta[-(1:n)])
        L[lower.tri(L, diag = FALSE)] <- 0
        sigma <- t(L) %*% L
    }
    else {
        sigma <- inv_vech(theta[-(1:n)])
    }
    if (det(sigma) <= 0 || any(diag(sigma) < 0)) {
        return(.Machine$integer.max)
    }
    f <- -(sum(dmvnorm(X, mean, sigma, log = TRUE)) - nrow(X) *
        log(pmvnorm(lower = lower, upper = upper, mean = mean,
            sigma = sigma)))
    if (is.infinite(f) || is.na(f)) {
        return(.Machine$integer.max)
    }
    f
}, start = as.list(c(0, 0, 0, 1, 0, 0, 1, 0, 1)), method = "BFGS",
    fixed = list())
Coefficients:
mu_1 mu_2 mu_3 sigma_1.1 sigma_1.2 sigma_1.3
0.64218198 1.51720543 0.97047201 1.73395947 -0.03889188 0.14627774
sigma_2.2 sigma_2.3 sigma_3.3
2.18020597 0.38822509 1.49854600
My question is: how do I extract these coefficients from the object fit1?
Thanks again for your time, and for your help in answering this question!
coef is a generic function which extracts model coefficients from objects returned by modeling functions. coefficients is an alias for it.
Usage
coef(object, ...)
coefficients(object, ...)
So, fit1@coef should work (or, equivalently, coef(fit1) via the generic quoted above).
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/coef.html
Sorry for this silly question: I will keep it just in case someone ends up looking.
fit1@coef
mu_1 mu_2 mu_3 sigma_1.1 sigma_1.2 sigma_1.3
0.64218198 1.51720543 0.97047201 1.73395947 -0.03889188 0.14627774
sigma_2.2 sigma_2.3 sigma_3.3
2.18020597 0.38822509 1.49854600
solves the query. Duh!
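For completeness, both accessors return the same values:
coef(fit1)   # via the coef() generic
fit1@coef    # via the S4 slot directly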
I have a simple question about R. This is a simple piece of code to generate random variates from a bivariate Clayton copula with normally distributed margins. How could I do this neatly if I had d identically distributed margins, without having to write c("norm", "norm", "norm", ...) and so on?
myMvd1 <- mvdc(copula = archmCopula(family = "clayton", param = 2),
               margins = c("norm", "norm"),
               paramMargins = list(list(mean = 0, sd = 1),
                                   list(mean = 0, sd = 1)))
You can use rep:
d <- 5
mvdc(copula = archmCopula(family = "clayton", param = 2, dim = d),  # dim must match the number of margins
     margins = rep("norm", d),
     paramMargins = rep(list(list(mean = 0, sd = 1)), d))
(Not knowing what this is about, I am not sure whether param = 2 is the value you want; note that param is the Clayton dependence parameter, while the dimension is set via dim as above.)
You can then draw the sample matrix like this; rMvdc already returns one row per draw and one column per margin:
rMvdc(nRow, myMvd1)