Different results in R and Eviews for SVAR

I'm estimating a SVAR in R, but the A-B form results are very different from those in Eviews, and I'm not sure why. Also, the option I think is correct gives me an error message. Could anyone help me?
Here is the R code I'm using:
library(vars)

# 5x5 restriction matrix for the A model: NA marks a free parameter, 0 a restriction
resA <- matrix(NA, nrow = 5, ncol = 5)
resA[2, 4] <- resA[2, 5] <- resA[3, 4] <- resA[3, 5] <- 0
resA[4, 2] <- resA[4, 3] <- resA[4, 5] <- 0
resA[5, 2] <- resA[5, 3] <- resA[5, 4] <- 0
resA[1, 1] <- resA[2, 2] <- resA[3, 3] <- resA[4, 4] <- resA[5, 5] <- 1
resA

# reduced-form VAR with two lags and a constant
model <- VAR(vardata, p = 2, type = "const")
summary(model)

# starting values for the 10 free parameters of A
stt <- matrix(0.1, nrow = 1, ncol = 10)
model1 <- SVAR(model, Amat = resA, lrtest = TRUE, estmethod = "scoring",
               start = stt, conv.crit = 0.0001, max.iter = 500)
summary(model1)

# runs is only used when boot = TRUE
irf.gap <- irf(model1, impulse = "gap", boot = FALSE, n.ahead = 15, runs = 100)
plot(irf.gap)
The problem is with the last command, irf. It gives me the reverse of the shape Eviews produces. Since Eviews only mentions that it uses a Cholesky decomposition with df adjustment (which should be relevant to the confidence intervals) and labels its output "Response to Cholesky One S.D. Innovations +/- 2 S.E.", I guess the problem comes from the one-S.D. and two-S.E. convention, but I'm still not sure what the R command irf actually does.
BTW, the R package I'm using is vars, and in Eviews I used the default settings for the IRF.
Update:
The problem happened because the irf command computes the structural impulse response function, which is different from Eviews' Cholesky decomposition.
A link with the steps to manually compute the Eviews version of the IRF would be much appreciated!
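For reference, here is a minimal sketch of how Cholesky-style orthogonalized IRFs can be computed by hand from the reduced-form VAR (my own sketch, not from Eviews' documentation; model is the VAR fitted above, and the column ordering of vardata fixes the Cholesky ordering):
# orthogonalized (Cholesky) impulse responses from the reduced-form VAR
Sigma.u <- summary(model)$covres  # residual covariance matrix
P <- t(chol(Sigma.u))             # lower-triangular Cholesky factor
Phi.h <- Phi(model, nstep = 15)   # MA coefficient matrices Phi_0, ..., Phi_15
chol.irf <- array(NA, dim = dim(Phi.h))
for (h in seq_len(dim(Phi.h)[3])) {
  chol.irf[, , h] <- Phi.h[, , h] %*% P  # response at horizon h - 1
}
# chol.irf[i, j, ] traces the response of variable i to a one-S.D. shock in variable j
Note that calling irf() on the reduced-form object, as in irf(model, ortho = TRUE, ...), already returns orthogonalized IRFs; whether they match Eviews exactly may still depend on the degrees-of-freedom adjustment Eviews applies to the residual covariance.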

Related

solving the cumulative distribution function (CDF) using R

Usually a distribution package in R contains three commands; for the normal distribution, for example, we have rnorm, dnorm and pnorm. I found a package called kdevine, and it has dkdevinecop and rkdevinecop, but it doesn't have the pkdevinecop option (the CDF).
I tried to write it like this, but it is wrong; could somebody take a look, please?
library(kdevine)
library(kdecopula)
data(wdbc)
fit <- kdevine(wdbc[, 5:7], xmin = rep(0, 3))
f <- dkdevine(wdbc[, 5:7], fit)
for (i in 1:length(f)) {
  p <- sum(dkdevinecop(c(wdbc[, 5], wdbc[, 6], wdbc[, 7]), fit))
}
print(p)
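One possible workaround is a rough Monte Carlo sketch (my own suggestion, assuming simulation accuracy is acceptable): when a package provides only a density (d*) and a sampler (r*), the joint CDF at a point q can be approximated by the share of simulated draws that are componentwise at or below q.
library(kdevine)
data(wdbc)
fit <- kdevine(wdbc[, 5:7], xmin = rep(0, 3))
sims <- rkdevine(10000, fit)                # draws from the fitted joint density
q <- unlist(wdbc[1, 5:7])                   # evaluation point: the first observation
p.hat <- mean(apply(t(sims) <= q, 2, all))  # P(X1 <= q1, X2 <= q2, X3 <= q3)
p.hat
The estimate is stochastic, so increasing the number of draws tightens it.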

H2O.GeneralizedLowRankModel objective is NA when passing loss by column

I am working with the h2o glrm function. When I try to pass the loss_by_col argument in order to specify a different loss function for each column in my data frame (I have normal, Poisson and binomial variables, so I am passing "Quadratic", "Poisson" and "Logistic" losses), the objective is not computed: testmodel@model$objective returns NaN. At the same time, the summary shows that a few iterations were made and the objective was NA for all of them. The quality of the model is very bad, but the archetypes are somehow computed, so I am confused. How should I pass a different loss for every variable in my dataset? Here is a (I hope) reproducible example:
library(h2o)
h2o.init()

df <- data.frame(p1 = rpois(100, 5), n1 = rnorm(100), b1 = rbinom(100, 1, 0.5))
df$b1 <- factor(df$b1)
h2df <- as.h2o(df)
testmodel <- h2o.glrm(h2df,
                      k = 3,
                      loss_by_col = c("Poisson", "Quadratic", "Logistic"),
                      transform = "STANDARDIZE")
testmodel@model$objective
summary(testmodel)
plot(testmodel)
Please note that there is a jira ticket for this here
It's interesting that you don't get an error when you run your code snippet. When I run it, I get the following error:
Error: DistributedException from localhost/127.0.0.1:54321: 'Poisson loss L(u,a) requires variable a >= 0', caused by java.lang.AssertionError: Poisson loss L(u,a) requires variable a >= 0
I can resolve this error by removing transform="STANDARDIZE", because standardization can lead to negative values. For more information on what the transformations do, take a look at the user guide; for your convenience, here is the definition of what standardize does: standardizing subtracts the mean and then divides each variable by its standard deviation.
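A minimal sketch of that fix, reusing the h2df frame from the question (keeping all other settings at their defaults is my assumption):
# drop transform = "STANDARDIZE" so the Poisson column keeps non-negative values
testmodel2 <- h2o.glrm(h2df,
                       k = 3,
                       loss_by_col = c("Poisson", "Quadratic", "Logistic"))
testmodel2@model$objective  # should now be a finite number rather than NaN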

Weighted Portmanteau Test for Fitted GARCH process

I have fitted a GARCH process to a time series and analyzed the ACF of the squared and absolute residuals to check the model's goodness of fit. But I also want to run a formal test, and after searching the internet, the Weighted Portmanteau Test (originally by Li and Mak) seems to be the one.
It's from the WeightedPortTest package and is one of the few (perhaps the only one?) that properly tests the GARCH residuals.
While going through the instructions in various documents, I can't wrap my head around what the "h.t" argument wants. The R help says I need to supply "a numeric vector of the conditional variances". This may be simple to an experienced user, but I'm struggling to understand. What is it that I need to do, and preferably, how would I code it in R?
Thankful for any kind of help.
Taken directly from the documentation:
h.t: a numeric vector of the conditional variances
A little toy example using the fGarch package follows:
library(fGarch)
library(WeightedPortTest)
# with beta = 0 this is an ARCH(1) process; garch(1, 0) fits the matching model
spec <- garchSpec(model = list(alpha = 0.6, beta = 0))
simGarch11 <- garchSim(spec, n = 300)
fit <- garchFit(formula = ~ garch(1, 0), data = simGarch11)
# fit@h.t holds the fitted conditional variances that the h.t argument wants
Weighted.LM.test(fit@residuals, fit@h.t, lag = 10)
And using garch() from the tseries package:
library(tseries)
fit2 <- garch(as.numeric(simGarch11), order = c(0, 1))
summary(fit2)
# comparison of fitted values (tseries returns conditional SDs, so square them):
tail(fit2$fitted.values[, 1]^2)
tail(fit@h.t)
# comparison of residuals after unstandardizing:
unstd <- fit2$residuals * fit2$fitted.values[, 1]
tail(unstd)
tail(fit@residuals)
Weighted.LM.test(unstd, fit2$fitted.values[, 1]^2, lag = 10)

computing ridge estimate manually in R, simple

I'm trying to learn about ridge regression, and I am using R. From what I understand, beta.r1 and beta.r2 in the code below should be the same.
library(MASS)
n <- 50
v1 <- runif(n)
v2 <- v1 + 2
V <- cbind(1, v1, v2)  # design matrix with an intercept column
w <- 3 + v1 + v2
I <- diag(3)
lambda <- 2  # arbitrarily chosen
# manual ridge estimate: (V'V + lambda*I)^(-1) V'w
beta.r1 <- solve(t(V) %*% V + lambda * I) %*% t(V) %*% w
# using library(MASS)
fit <- lm.ridge(w ~ v1 + v2, lambda = 2, Inter = FALSE)
beta.r2 <- coef(fit)
# Shouldn't beta.r1 and beta.r2 be the same?
I think it is the variable scaling performed in the lm.ridge code (which you can inspect by typing lm.ridge into your R console) that likely causes the differences. The code scales each variable by its root-mean-squared value:
Xscale <- drop(rep(1/n, n) %*% X^2)^0.5
X <- X/rep(Xscale, rep(n, p))
Your code does not perform any variable scaling.
The variable scaling is hinted at on the ?lm.ridge help page in the description of what is returned by lm.ridge:
scales: scalings used on the X matrix.
Therefore you can access the scaling used by lm.ridge:
fit$scales
# v1 v2
# 0.2650311 0.2650311
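To make this concrete, here is a rough sketch (my own, assuming lm.ridge's default intercept handling, which centers the variables before applying the root-mean-square scaling) of how lm.ridge's slope coefficients can be reproduced manually:
X <- cbind(v1, v2)
Xc <- sweep(X, 2, colMeans(X))             # center the predictors, as lm.ridge does
wc <- w - mean(w)                          # center the response
Xscale <- drop(rep(1/n, n) %*% Xc^2)^0.5   # the scaling from the lm.ridge source
Xs <- Xc / rep(Xscale, rep(n, 2))
b.scaled <- solve(t(Xs) %*% Xs + lambda * diag(2)) %*% t(Xs) %*% wc
drop(b.scaled) / Xscale                    # compare with coef(lm.ridge(w ~ v1 + v2, lambda = 2))[-1]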

How to get confidence intervals by bootstrapping for quantile regressions by default

In my statistics class we use Stata, and since I'm an R user I want to do the same things in R. I've gotten the right results, but it seems like a somewhat awkward way of getting something as simple as confidence intervals.
Here's my crude solution:
library(quantreg)
na <- round(runif(100, min = 127, max = 144))
f <- rq(na ~ 1, tau = 0.5)                  # median regression on a constant
s <- summary.rq(f, se = "boot", R = 1000)   # bootstrapped standard errors
coef(s)[1]                                  # point estimate of the median
coef(s)[1] + c(-1, 1) * 1.96 * coef(s)[2]   # normal-approximation 95% CI
I've also experimented a little with the boot package, but I haven't gotten it to work:
library(boot)
b <- boot(na, function(w, i) {
  rand_bootstrap_sample <- w[i]
  f <- rq(rand_bootstrap_sample ~ 1, tau = 0.5)
  return(coef(f))
}, R = 100)
boot.ci(b)
This gives an error:
Error in bca.ci(boot.out, conf, index[1L], L = L, t = t.o, t0 = t0.o, :
estimated adjustment 'a' is NA
My questions:
Is there a better way of getting the confidence interval?
Why is the bootstrap code complaining?
Your example does not give an error message for me (Windows 7/64, R 2.14.2), so it could be a problem of random seeds. If you post an example using some random method, it is better to add a set.seed line; see the example below.
Note that the error message refers to the bca type of boot.ci; since this one often complains, deselect it by giving the type explicitly.
I do not know exactly why you use the rather complex rq in the bootstrap. If you really wanted to profile rq, forget the simple example below, but please give some more details.
library(boot)
set.seed(4711)
na <- round(runif(100, min = 127, max = 144))
# bootstrap the sample median directly instead of refitting rq each time
b <- boot(na, function(w, i) median(w[i]), R = 1000)
# request only interval types that do not require the bca adjustment
boot.ci(b, type = c("norm", "basic", "perc"))
