I am studying the effects of skewness and kurtosis on the Pearson corrections to bivariate correlations for range restriction. Currently I am using R and the rcorrvar function, since it should allow me to generate correlated vectors with a specifiable skew and kurtosis. When I run it as below:
rcorrvar(n = 100, k_cont = 2, k_CAT = 2,pois = 2, k_nb = 0,
method = c("Fleishman", "Polynomial"), means = 0, vars = 1,
skews = 2,skurts = 4,fifths = NULL, sixths = NULL,
Six = list(), marginal = list(), support = list(), nrand = 100,
lam = NULL, size = NULL, prob = NULL, mu = NULL, Sigma = NULL,
rho = NULL, cstart = NULL, seed = 1234, errorloop = FALSE,
epsilon = 0.001, maxit = 1000, extra_correct = TRUE)
Error in rcorrvar(n = 100, k_cont = 2, k_CAT = 2, pois = 2, k_nb = 0, :
unused arguments (k_CAT = 2, pois = 2)
How do I correct these errors?
Assuming that the rcorrvar function you're using is from the SimMultiCorrData package, it looks like you have misspelled the two arguments: they are supposed to be k_cat and k_pois.
Please note that argument names in R are case-sensitive.
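A corrected call would look like the sketch below (untested; note that with categorical and Poisson variables you will likely also need to supply marginal probabilities via marginal and rate parameters via lam, which the original call leaves empty):

```r
library(SimMultiCorrData)

# Same call with the argument names fixed: k_cat and k_pois (lowercase).
# The marginal and lam values are illustrative placeholders.
sim <- rcorrvar(n = 100, k_cont = 2, k_cat = 2, k_pois = 2, k_nb = 0,
                method = "Fleishman", means = 0, vars = 1,
                skews = 2, skurts = 4,
                marginal = list(c(0.5), c(0.5)),  # example binary marginals
                lam = c(1, 2),                    # example Poisson rates
                seed = 1234)
```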
I use the R package fPortfolio for portfolio optimization for a rolling portfolio (adaptive asset allocation). Therefore, I use the backtesting function.
I aim to construct a portfolio for a set of assets with a predefined target return (and minimized risk), or with a predefined target risk and maximized return.
Even allowing short selling (as proposed in another post five years ago) does not seem to work. Besides, I do not want to allow short selling in my approach.
I cannot figure out why changing the values for target return or target risk does not influence the solution at all.
Where do I go wrong?
require(quantmod)
require(fPortfolio)
require(PortfolioAnalytics)
tickers= c("SPY","TLT","GLD","VEIEX","QQQ","SHY")
getSymbols(tickers)
data.raw = as.timeSeries(na.omit(cbind(Ad(SPY),Ad(TLT),Ad(GLD),Ad(VEIEX),Ad(QQQ),Ad(SHY))))
data.arith = na.omit(Return.calculate(data.raw, method="simple"))
colnames(data.arith) = c("SPY","TLT","GLD","VEIEX","QQQ","SHY")
cvarSpec <- portfolioSpec(
  model = list(
    type = "CVAR",
    optimize = "maxReturn",
    estimator = "covEstimator",
    tailRisk = list(),
    params = list(alpha = 0.05, a = 1)),
  portfolio = list(
    weights = NULL,
    targetReturn = NULL,
    targetRisk = 0.08,
    riskFreeRate = 0,
    nFrontierPoints = 50,
    status = 0),
  optim = list(
    solver = "solveRglpk.CVAR",
    objective = NULL,
    params = list(),
    control = list(),
    trace = FALSE))
backtest = portfolioBacktest()
setWindowsHorizon(backtest) = "12m"
assets <- SPY ~ SPY + TLT + GLD + VEIEX + QQQ + SHY
portConstraints ="LongOnly"
myPortfolio = portfolioBacktesting(
  formula = assets,
  data = data.arith,
  spec = cvarSpec,
  constraints = portConstraints,
  backtest = backtest,
  trace = TRUE)
setSmootherLambda(myPortfolio$backtest) <- "1m"
myPortfolioSmooth <- portfolioSmoothing(myPortfolio)
backtestPlot(myPortfolioSmooth, cex = 0.6, font = 1, family = "mono")
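One thing worth checking (an assumption based on how fPortfolio's backtesting is structured, not a verified fix): portfolioBacktesting() rebalances each window with the strategy function stored in the backtest object, and the default strategy computes the tangency portfolio, which ignores the targetReturn/targetRisk slots of the spec entirely. That would explain why changing the targets has no effect. A custom strategy function can be registered with setStrategyFun(); the sketch below assumes a strategy that simply delegates to efficientPortfolio() so the spec's targets are honored:

```r
# Hypothetical strategy that respects the spec (and thus its targets),
# instead of the default tangency strategy:
myStrategy <- function(data, spec, constraints, backtest) {
  efficientPortfolio(data = data, spec = spec, constraints = constraints)
}
setStrategyFun(backtest) <- "myStrategy"

# then rerun portfolioBacktesting() with this backtest object
```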
How can I impose constraints on the coefficient matrix of a VAR model in R?
Some of my code follows:
library(readxl)
library(vars)  # provides VAR(), restrict(), irf()

dat_pc_log_d <- read_excel("C:/Users/Desktop/dat_pc_log_d.xlsx")
dat_pc_log_d$itcrm = NULL
dat_pc_log_d$...1 = NULL
data = ts(dat_pc_log_d, start = c(2004, 1), end = c(2019, 1), frequency = 4)
VAR_modelo = VAR(data, p = 2)
VAR_modelo_restriccion = restrict(VAR_modelo, method = "ser", thresh = 2.0)
ir_pib = irf(VAR_modelo_restriccion, impulse = "pbipc_log_d",
             response = c("pbipc_log_d", "expopc_log_d", "pbiagr_log_d"),
             boot = TRUE, ci = 0.95)
I need to ensure exogeneity of one variable; to do that, I have to set some lag coefficients of the independent variable to zero. How can I do it? Thanks.
library(readxl)
library(vars)

dat_pc_log_d <- read_excel("C:/Users//dat_pc_log_d.xlsx")
dat_pc_log_d$...1 = NULL
data = ts(dat_pc_log_d, start = c(2004, 1), end = c(2019, 1), frequency = 4)
VAR_modelo = VAR(data, p = 2)

# One row per equation (8 variables), one column per regressor
# (8 variables x 2 lags + constant = 17). A 1 keeps the coefficient,
# a 0 restricts it to zero.
restriccion = matrix(c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                       1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                       1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                       1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                       1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                       1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                       1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
                       0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1),
                     nrow = 8, ncol = 17, byrow = TRUE)
VAR_modelo_restriccion = restrict(VAR_modelo, method = "man", resmat = restriccion)
ir_pib = irf(VAR_modelo_restriccion, impulse = "itcrm",
             response = c("pbipc_log_d", "expopc_log_d", "inverpc_log_d", "pbiagr_log_d"),
             boot = TRUE, n.ahead = 20, ci = 0.68)
I would like to tune "classif.h2o.deeplearning" learner via mlr. During the tuning I have several architectures I would like explored. For each of these architectures I would like to specify a dropout space. However I am struggling with this.
Example:
library(mlr)
library(h2o)
ctrl <- makeTuneControlRandom(maxit = 10)
lrn <- makeLearner("classif.h2o.deeplearning", predict.type = "prob")
I define two architectures, "a" and "b", via the "hidden" DiscreteParam; for each of them I would like to create a NumericVectorParam for "hidden_dropout_ratios":
par_set <- makeParamSet(
  makeDiscreteParam("hidden", values = list(a = c(16L, 16L),
                                            b = c(16L, 16L, 16L))),
  makeDiscreteParam("activation", values = "RectifierWithDropout", tunable = FALSE),
  makeNumericParam("input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1),
  makeNumericVectorParam("hidden_dropout_ratios", len = 2, lower = 0, upper = 0.6,
                         default = rep(0.3, 2), requires = quote(length(hidden) == 2)),
  makeNumericVectorParam("hidden_dropout_ratios", len = 3, lower = 0, upper = 0.6,
                         default = rep(0.3, 3), requires = quote(length(hidden) == 3)))
This produces an error:
Error in makeParamSet(makeDiscreteParam("hidden", values = list(a = c(16L, :
All parameters must have unique names!
Setting just one of them results in dropout being applied only to architectures with the matching number of hidden layers.
When I attempt to use the same dropout for all hidden layers:
par_set <- makeParamSet(
  makeDiscreteParam("hidden", values = list(a = c(16L, 16L),
                                            b = c(16L, 16L, 16L))),
  makeDiscreteParam("activation", values = "RectifierWithDropout", tunable = FALSE),
  makeNumericParam("input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1),
  makeNumericParam("hidden_dropout_ratios", lower = 0, upper = 0.6, default = 0.3))

tw <- makeTuneWrapper(lrn,
                      resampling = cv3,
                      control = ctrl,
                      par.set = par_set,
                      show.info = TRUE,
                      measures = list(auc, bac))

perf_tw <- resample(tw,
                    task = sonar.task,
                    resampling = cv5,
                    extract = getTuneResult,
                    models = TRUE,
                    show.info = TRUE,
                    measures = list(auc, bac))
I get the error:
Error in .h2o.doSafeREST(h2oRestApiVersion = h2oRestApiVersion, urlSuffix = page, :
ERROR MESSAGE:
Illegal argument(s) for DeepLearning model: DeepLearning_model_R_1566289564965_2. Details: ERRR on field: _hidden_dropout_ratios: Must have 3 hidden layer dropout ratios.
Perhaps I could overcome this by creating a separate learner for each architecture and then combining them with makeModelMultiplexer?
I would appreciate your help in overcoming this. Thanks.
EDIT: I was able to overcome this using makeModelMultiplexer and by creating a learner for each architecture (number of hidden layers).
base_lrn <- list(
  makeLearner("classif.h2o.deeplearning",
              id = "h20_2",
              predict.type = "prob"),
  makeLearner("classif.h2o.deeplearning",
              id = "h20_3",
              predict.type = "prob"))

mm_lrn <- makeModelMultiplexer(base_lrn)

par_set <- makeParamSet(
  makeDiscreteParam("selected.learner", values = extractSubList(base_lrn, "id")),
  makeDiscreteParam("h20_2.hidden", values = list(a = c(16L, 16L),
                                                  b = c(32L, 32L)),
                    requires = quote(selected.learner == "h20_2")),
  makeDiscreteParam("h20_3.hidden", values = list(a = c(16L, 16L, 16L),
                                                  b = c(32L, 32L, 32L)),
                    requires = quote(selected.learner == "h20_3")),
  makeDiscreteParam("h20_2.activation", values = "RectifierWithDropout", tunable = FALSE,
                    requires = quote(selected.learner == "h20_2")),
  makeDiscreteParam("h20_3.activation", values = "RectifierWithDropout", tunable = FALSE,
                    requires = quote(selected.learner == "h20_3")),
  makeNumericParam("h20_2.input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1,
                   requires = quote(selected.learner == "h20_2")),
  makeNumericParam("h20_3.input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1,
                   requires = quote(selected.learner == "h20_3")),
  makeNumericVectorParam("h20_2.hidden_dropout_ratios", len = 2, lower = 0, upper = 0.6,
                         default = rep(0.3, 2),
                         requires = quote(selected.learner == "h20_2")),
  makeNumericVectorParam("h20_3.hidden_dropout_ratios", len = 3, lower = 0, upper = 0.6,
                         default = rep(0.3, 3),
                         requires = quote(selected.learner == "h20_3")))

tw <- makeTuneWrapper(mm_lrn,
                      resampling = cv3,
                      control = ctrl,
                      par.set = par_set,
                      show.info = TRUE,
                      measures = list(auc, bac))

perf_tw <- resample(tw,
                    task = sonar.task,
                    resampling = cv5,
                    extract = getTuneResult,
                    models = TRUE,
                    show.info = TRUE,
                    measures = list(auc, bac))
Is there a more elegant solution?
I have no experience with h2o learners or their deep learning approach.
However, specifying the same parameter twice in a single ParamSet (as in your first try) won't work, so you will always need two separate parameter sets anyway.
I cannot say anything about the second error you are getting; that looks like an h2o-related problem.
Using makeModelMultiplexer() is one option. You can also run separate benchmark() calls and aggregate the results afterwards.
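The benchmark() route could look like the following sketch (untested; par_set_2layer and par_set_3layer are hypothetical names for one parameter set per architecture, each with a hidden_dropout_ratios vector of matching length):

```r
# One tune wrapper per architecture, then a single benchmark() call
# that evaluates both and lets you compare/aggregate the results.
tw2 <- makeTuneWrapper(makeLearner("classif.h2o.deeplearning", id = "h2o_2l",
                                   predict.type = "prob"),
                       resampling = cv3, control = ctrl,
                       par.set = par_set_2layer, measures = list(auc, bac))
tw3 <- makeTuneWrapper(makeLearner("classif.h2o.deeplearning", id = "h2o_3l",
                                   predict.type = "prob"),
                       resampling = cv3, control = ctrl,
                       par.set = par_set_3layer, measures = list(auc, bac))

bmr <- benchmark(list(tw2, tw3), tasks = sonar.task,
                 resamplings = cv5, measures = list(auc, bac))
getBMRAggrPerformances(bmr, as.df = TRUE)  # aggregated performances side by side
```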
This is my first post here, so I am not quite sure how to frame a question, but I will try my best.
I am trying to forecast densities of daily exchange rates; I have chosen EUR/USD as the currency pair I'd like to forecast, and I am using GARCH models estimated with the "rugarch" package. The code looks like this:
library(rugarch)
library(xts)

ex1 <- as.xts(DEXUSEU)   # DEXUSEU is the daily exchange-rate data
ex2 <- ex1[!is.na(ex1)]
Nd <- nrow(ex2)
lex2 <- 100 * log(ex2[2:Nd, ] / ex2[1:(Nd - 1), ])  # log differences, 4496 observations

# GARCH(1,1) with Student-t distribution
model1 <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1),
                        submodel = NULL, external.regressors = NULL,
                        variance.targeting = FALSE),
  mean.model = list(armaOrder = c(0, 0), include.mean = TRUE,
                    archm = FALSE, archpow = 1, arfima = FALSE,
                    external.regressors = NULL, archex = FALSE),
  distribution.model = "std")

modelfit1 <- ugarchfit(model1, data = lex2, out.sample = 2000)  # fit the model

# rolling one-step-ahead forecast
modelroll1 <- ugarchroll(
  model1, data = lex2, n.ahead = 1, forecast.length = 2000,
  n.start = NULL, refit.every = 50, refit.window = "rolling",
  window.size = NULL, solver = "hybrid", fit.control = list(),
  solver.control = list(), calculate.VaR = TRUE, VaR.alpha = 0.01,
  cluster = NULL, keep.coef = TRUE)

plot(modelroll1, which = 1)
That's how the density forecast looks, and I am quite sure something is wrong here; it shouldn't look like this.
Can anybody please help and tell me what I did wrong? I can provide additional data or information if needed; I am just not sure what else to provide, as this is my first post here. Any kind of help would be very much appreciated.
I'm using the EnvStats package, more specifically the simulateVector function, to generate random samples from pdfs.
I've tried using a normal pdf and varying the parameters that truncate it:
> vfy <- simulateVector(10, distribution = "norm",
+ param.list = list(mean = 400, sd = 40), seed = 47,
+ sort = FALSE, left.tail.cutoff = 1, right.tail.cutoff = 1)
> vfy
[1] 479.7879 428.4457 407.4162 388.7294 404.3510 356.5705 360.5807 400.6052 389.9182 341.3700
> vfy <- simulateVector(10, distribution = "norm",
+ param.list = list(mean = 400, sd = 40), seed = 47,
+ sort = FALSE, left.tail.cutoff = 0, right.tail.cutoff = 0)
> vfy
[1] 479.7879 428.4457 407.4162 388.7294 404.3510 356.5705 360.5807 400.6052 389.9182 341.3700
To my surprise, the results do not vary. What's wrong? Thanks.
The left.tail.cutoff and right.tail.cutoff arguments are only relevant when you use sample.method = "LHS" for Latin hypercube sampling.
The default is sample.method = "SRS" for simple random sampling, which uses the rnorm() function. The help file states: "This argument is ignored if sample.method="SRS"."
See ?simulateVector for the default arguments.
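For example (a sketch, not run here; see ?simulateVector for the exact semantics of the cutoff arguments), switching to Latin hypercube sampling should make the cutoffs take effect:

```r
library(EnvStats)

# With sample.method = "LHS" the tail cutoffs are honored, so changing
# them should now change the sample (unlike under the default "SRS"):
vfy_lhs <- simulateVector(10, distribution = "norm",
                          param.list = list(mean = 400, sd = 40), seed = 47,
                          sample.method = "LHS",
                          left.tail.cutoff = 0.05, right.tail.cutoff = 0.05)
vfy_lhs
```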