Usually, any distribution implemented in an R package comes with three functions; for the normal distribution, for example, we have rnorm, dnorm, and pnorm. I found a package called kdevine, and it has dkdevinecop and rkdevinecop, but it doesn't have pkdevinecop (the CDF).
I tried to write it myself like this, but it is wrong. Could somebody take a look, please?
library(kdevine)
library(kdecopula)
data(wdbc)
fit <- kdevine(wdbc[, 5:7], xmin = rep(0, 3))  # fit the kernel vine density
f <- dkdevine(wdbc[, 5:7], fit)                # density at the data points
# my attempt at the CDF (this is the part that is wrong):
for (i in 1:length(f)) {
  p <- sum(dkdevinecop(c(wdbc[, 5], wdbc[, 6], wdbc[, 7]), fit))
}
print(p)
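Since the package exports only d* and r* functions, one way to approximate the CDF is by Monte Carlo: simulate from the fitted model and take the proportion of draws that fall below the evaluation point. A minimal sketch, assuming rkdevine(n, fit) simulates from the fitted object (the evaluation point x0 below is a hypothetical placeholder):
set.seed(1)
sims <- rkdevine(10000, fit)  # draws from the fitted density
x0 <- c(15, 20, 100)          # hypothetical evaluation point
# empirical CDF estimate: P(X1 <= x0[1], X2 <= x0[2], X3 <= x0[3])
p_hat <- mean(sims[, 1] <= x0[1] & sims[, 2] <= x0[2] & sims[, 3] <= x0[3])
p_hat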
Using the dlm package in R, I fit a dynamic linear model to a time series data set consisting of 20 observations. I then use the dlmForecast function to predict future values (which I can validate against the genuine data for that period).
I use the following code to create a prediction interval:
ciTheory <- (outer(sapply(fut1$Q, FUN = function(x) sqrt(diag(x))), qnorm(c(0.05, 0.95))) +
             as.vector(t(fut1$f)))
However, my data do not follow a normal distribution, and I wondered whether it would be possible to swap qnorm for the quantile function of another distribution. I have tried qt, but I am unable to apply qgamma.
Just wondered if anyone knew how you would go about sorting this.
Below is a reproducible version of my code:
library(dlm)

data <- c(20.68502, 17.28549, 12.18363, 13.53479, 15.38779, 16.14770,
          20.17536, 43.39321, 42.91027, 49.41402, 59.22262, 55.42043)

# Local level model with the variances parameterized on the log scale
mod.build <- function(par) {
  dlmModPoly(1, dV = exp(par[1]), dW = exp(par[2]))
}

# Maximum likelihood estimates of the variance parameters
mle <- dlmMLE(data, rep(0, 2), mod.build)
if (mle$convergence == 0) print("converged") else print("did not converge")

# Rebuild the model at the MLE, filter, and forecast 7 steps ahead
mod1 <- mod.build(mle$par)
mod1Filt <- dlmFilter(data, mod1)
fut1 <- dlmForecast(mod1Filt, nAhead = 7)
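If you want gamma-based intervals, one approach is moment matching: pick the gamma shape and rate so that its mean and variance equal the forecast mean and variance at each horizon, then take its quantiles in place of qnorm. A hedged sketch under that assumption (the moment-matching choice is mine, not something dlm prescribes, and it only makes sense while the forecast means are positive):
# Gamma quantile intervals via moment matching: mean m = shape/rate and
# variance s2 = shape/rate^2, so shape = m^2/s2 and rate = m/s2
m <- as.vector(t(fut1$f))                  # forecast means
s2 <- sapply(fut1$Q, function(x) diag(x))  # forecast variances
shape <- m^2 / s2
rate <- m / s2
ciGamma <- cbind(qgamma(0.05, shape = shape, rate = rate),
                 qgamma(0.95, shape = shape, rate = rate))
ciGamma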
Cheers
I am a newbie in R and have searched several forums but didn't get an answer so far. We are asked to do a maximum likelihood estimation in R for an AR(1) model without using the arima() command. We should estimate the intercept alpha, the coefficient beta (rho in the code below), and the variance sigma2. The data are assumed to follow a normal distribution, which is what I derived the log-likelihood function from. I then tried to program the function with the following code:
Y <- data$V2

nlogL <- function(theta, Y) {
  alpha <- theta[1]
  rho <- theta[2]
  sigma2 <- theta[3]
  logl <- -(100/2)*log(2*pi) - (100/2)*log(theta[3]) -
    (0.5*theta[3])*sum(Y - (theta[1]/(1 - theta[2]))**2)
  return(-logl)
}

par0 <- c(0.1, 0.1, 0.1)
opt <- optim(par0, nlogL, hessian = TRUE)
When running this code I always get the error message:
Error in Y - (theta[1]/(1 - theta[2]))^2 : 'Y' is missing
It would also be great if you could check whether the likelihood function is derived correctly.
Thank you very much in advance for your help!
Your nlogL function should only take a single argument, theta. So you can fix your immediate problem simply by removing the second argument from the function; Y would then be resolved by its definition outside of nlogL. Alternatively, you could keep the signature of nlogL as-is and pass Y as an additional argument through optim, like this: optim(par0, nlogL, hessian = TRUE, Y = Y). I would also second chinsoon12's suggestion to review ?optim.
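Beyond the missing argument, the likelihood itself has two issues: the sum should square the whole residual, i.e. sum((Y - ...)^2) rather than sum(Y - (...)^2), and the quadratic term should divide by sigma2, not multiply by it. For illustration, a hedged sketch of a conditional AR(1) negative log-likelihood (conditioning on the first observation; the simulated Y is just a placeholder for the asker's data):
nlogL <- function(theta, Y) {
  alpha <- theta[1]
  rho <- theta[2]
  sigma2 <- theta[3]
  if (sigma2 <= 0) return(1e10)         # keep the optimizer in the valid region
  n <- length(Y)
  resid <- Y[-1] - alpha - rho * Y[-n]  # one-step-ahead residuals
  # negative conditional log-likelihood of the n-1 transitions
  ((n - 1)/2) * log(2*pi*sigma2) + sum(resid^2) / (2*sigma2)
}

set.seed(1)
Y <- arima.sim(list(ar = 0.5), n = 100) + 1  # placeholder data
opt <- optim(c(0.1, 0.1, 0.1), nlogL, Y = Y, hessian = TRUE)
opt$par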
I'm estimating an SVAR in R, but the A-B form results are very different from those in EViews, and I'm not sure why. Also, the option I think is right gives me an error message. Could anyone help me?
Here is the R code I'm using:
library(vars)

# A-matrix restrictions: 1 on the diagonal, 0 for excluded contemporaneous
# effects, NA for the free parameters to be estimated
resA <- matrix(NA, nrow = 5, ncol = 5)
resA[2,4] = resA[2,5] = resA[3,4] = resA[3,5] = resA[4,2] = resA[4,3] = resA[4,5] = 0
resA[5,2] = resA[5,3] = resA[5,4] = 0
resA[1,1] = resA[2,2] = resA[3,3] = resA[4,4] = resA[5,5] = 1
resA

# vardata: the 5-variable data set (not shown)
model = VAR(vardata, p = 2, type = "const")
summary(model)

stt = matrix(0.1, nrow = 1, ncol = 10)  # starting values for the free parameters
model1 = SVAR(model, Amat = resA, lrtest = TRUE, estmethod = "scoring",
              start = stt, conv.crit = 0.0001, max.iter = 500)
summary(model1)

irf.gap = irf(model1, impulse = "gap", boot = FALSE, n.ahead = 15, runs = 100)
plot(irf.gap)
The problem is the last command, irf. It gives me a reversed shape compared to EViews. Since EViews only mentions that it uses a Cholesky decomposition with df adjustment (which should only be relevant to the confidence intervals) and "Response to Cholesky one S.D. Innovations +/- 2 S.E.", I guess the problem comes from the one-S.D. and 2-S.E. part, but I'm still not sure what the R command irf actually does.
BTW, the R package I'm using is vars, and for EViews I used the default settings for the IRF.
Updated:
The problem happened because the irf command computes the structural impulse response function, which is different from EViews' Cholesky decomposition.
Any link with the steps to manually compute the EViews version of the IRF would be really appreciated!
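For what it's worth, here is a hedged sketch of how one could compute recursive (Cholesky) impulse responses by hand from the reduced-form VAR, using Phi() from vars for the MA coefficient matrices; whether this reproduces EViews exactly depends on its df adjustment and on the variable ordering:
# Orthogonalized IRFs: multiply each MA(h) coefficient matrix by the
# lower-triangular Cholesky factor of the residual covariance matrix
Sigma <- summary(model)$covres  # reduced-form residual covariance
P <- t(chol(Sigma))             # lower-triangular factor
Phis <- Phi(model, nstep = 15)  # K x K x (nstep+1) array
# responses of all variables to a one-s.d. shock in the first variable:
irf_manual <- t(apply(Phis, 3, function(Phi_h) (Phi_h %*% P)[, 1]))
irf_manual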
I have fitted a GARCH process to a time series and analyzed the ACF of the squared and absolute residuals to check the model's goodness of fit. But I also want to do a formal test, and after searching the internet, the Weighted Portmanteau Test (originally by Li and Mak) seems to be the one.
It's from the WeightedPortTest package and is one of the few (perhaps the only one?) that properly tests GARCH residuals.
While going through the instructions in various documents, I can't wrap my head around what the "h.t" argument wants. The R documentation says I need to supply "a numeric vector of the conditional variances". This may be simple for an experienced user, but I'm struggling to understand. What is it that I need to do, and preferably, how would I code it in R?
Thankful for any kind of help
Taken directly from the documentation:
h.t: a numeric vector of the conditional variances
A little toy example using the fGarch package follows:
library(fGarch)
library(WeightedPortTest)

spec <- garchSpec(model = list(alpha = 0.6, beta = 0))  # ARCH(1) specification
simGarch11 <- garchSim(spec, n = 300)
fit <- garchFit(formula = ~ garch(1, 0), data = simGarch11)

# h.t is the fitted conditional variance series, stored in the fit's h.t slot
Weighted.LM.test(fit@residuals, fit@h.t, lag = 10)
And using garch() from the tseries package:
library(tseries)
fit2 <- garch(as.numeric(simGarch11), order = c(0, 1))
summary(fit2)
# comparison of fitted values:
tail(fit2$fitted.values[,1]^2)
tail(fit@h.t)
# comparison of residuals after unstandardizing:
unstd <- fit2$residuals*fit2$fitted.values[,1]
tail(unstd)
tail(fit@residuals)
Weighted.LM.test(unstd, fit2$fitted.values[,1]^2, lag = 10)
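If you fitted your model with rugarch instead, a hedged equivalent would be to square the conditional standard deviations that sigma() returns to obtain the conditional variances h.t expects (the spec below is only a toy, mirroring the ARCH(1) example above):
library(rugarch)
spec2 <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 0)),
                    mean.model = list(armaOrder = c(0, 0)))
fit3 <- ugarchfit(spec2, data = as.numeric(simGarch11))
# sigma(fit3) is the conditional s.d. series; square it to get h.t
Weighted.LM.test(as.numeric(residuals(fit3)), as.numeric(sigma(fit3))^2, lag = 10)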
I'm trying to learn about ridge regression, and I am using R. From what I understand, beta.r1 and beta.r2 in the code below should be the same.
library(MASS)

n <- 50
v1 <- runif(n)
v2 <- v1 + 2
V <- cbind(1, v1, v2)
w <- 3 + v1 + v2
I <- diag(3)
lambda <- 2  # arbitrarily chosen

# ridge solution computed by hand
beta.r1 <- solve(t(V) %*% V + lambda * I) %*% t(V) %*% w

# using library(MASS)
fit <- lm.ridge(w ~ v1 + v2, lambda = 2, Inter = FALSE)
beta.r2 <- coef(fit)
#Shouldn't beta.r1 and beta.r2 be the same?
I think it is the variable scaling performed in the lm.ridge code (which you can view by typing lm.ridge into your R console) that likely causes the differences. The code scales each variable by its root-mean-square value:
Xscale <- drop(rep(1/n, n) %*% X^2)^0.5
X <- X/rep(Xscale, rep(n, p))
Your code does not perform any variable scaling.
The variable scaling is hinted at on the ?lm.ridge help page in the description of what is returned by lm.ridge:
scales: scalings used on the X matrix.
Therefore you can access the scaling used by lm.ridge:
fit$scales
# v1 v2
# 0.2650311 0.2650311
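To reconcile the two by hand, you can mimic what the lm.ridge source does: center the predictors and the response, scale the centered predictors by their root-mean-square, solve the ridge system on that scaled data, and then undo the scaling. A minimal sketch along those lines (note that, as far as I can tell, Inter is not a documented lm.ridge argument, so an intercept is still fitted here):
X <- cbind(v1, v2)
Xm <- colMeans(X); ym <- mean(w)
Xc <- sweep(X, 2, Xm)                       # centered predictors
Xs <- drop(rep(1/n, n) %*% Xc^2)^0.5        # root-mean-square scalings
Z <- Xc / rep(Xs, rep(n, 2))                # scaled predictors
b <- solve(t(Z) %*% Z + lambda * diag(2), t(Z) %*% (w - ym))
beta.manual <- drop(b) / Xs                 # back to the original scale
c(ym - sum(beta.manual * Xm), beta.manual)  # compare with coef(fit)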