Code:
library(nnet)
library(caret)
# Repeated k-fold resampling method for fitting the model
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 10,
allowParallel = TRUE) #10 separate 10-fold cross-validations
nnetGrid <- expand.grid(decay = seq(0.0002, .0008, length = 4),
size = seq(6, 10, by = 2),
bag = FALSE)
set.seed(100)
nnetFitcv <- train(R ~ .,
data = trainSet,
method = "avNNet",
tuneGrid = nnetGrid,
trControl = ctrl,
preProc = c("center", "scale"),
linout = TRUE,
## Reduce the amount of printed output
trace = FALSE,
## Increase the number of iterations used to find
## the parameter estimates...
maxit = 2000,
## ...and raise the cap on the number of weights used by the model
MaxNWts = 5 * (34 + 1) + 5 + 1)
Error:
Error in train.default(x, y, weights = w, ...) :
final tuning parameters could not be determined
In addition: Warning messages:
1: In nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, :
There were missing values in resampled performance measures.
2: In train.default(x, y, weights = w, ...) :
missing values found in aggregated results
Data:
dput(head(trainSet))
structure(list(fy = c(317.913756282, 365.006253069, 392.548100067,
305.350697829, 404.999341917, 326.558279739), fu = c(538.962896683,
484.423120589, 607.974981919, 566.461909098, 580.287855801, 454.178316794
), E = c(194617.707566, 181322.455065, 206661.286272, 182492.029532,
189867.929239, 181991.379749), eu = c(0.153782620813, 0.208857408687,
0.29933255604, 0.277013319499, 0.251278125174, 0.20012525805),
imp_local = c(1555.3450957, 1595.41614044, 763.56392418,
1716.78277731, 1045.72429616, 802.742305814), imp_global = c(594.038972858,
1359.48216529, 1018.89209367, 850.887850177, 1381.3557372,
1714.66351462), teta1c = c(0.033375064111, 0.021482368218,
0.020905367537, 0.006956337817, 0.034913536977, 0.03009770223
), k1c = c(4000921.55552, 4499908.41979, 9764999.26902, 9273400.46159,
6163057.88855, 12338543.5703), k2_2L = c(98633499.5682, 53562216.5496,
51597126.6866, 79496746.0098, 54060378.6334, 88854286.5457
), k2_3L = c(53752551.0262, 125020222.794, 124021434.482,
125817803.431, 75021821.6702, 35160224.288), k2_4L = c(56725106.5978,
126865701.893, 145764489.664, 64837586.8755, 49128911.0832,
70088564.0166), bmaxc = c(3481281.32908, 4393584.00639, 2614830.02391,
3128593.72039, 3179348.29527, 4274637.35956), dfactorc = c(2.5474729895,
2.94296926288, 2.79505551368, 2.47882735165, 2.46407943564,
1.41121223341), amaxc = c(73832.9746763, 99150.5068997, 77165.4338508,
128546.996471, 53819.0447533, 54870.9707106), teta1s = c(0.015467320192,
0.013675755546, 0.031668366149, 0.028898297322, 0.019211801086,
0.013349768955), k1s = c(5049506.54552, 11250622.6842, 13852560.5089,
18813117.5726, 18362782.7372, 14720875.0829), k2_ab1s = c(276542468.441,
275768806.723, 211613299.608, 264475187.749, 162043062.526,
252936228.465), k2_ab2s = c(108971516.033, 114017918.32,
248886114.151, 213529935.615, 236891513.077, 142986118.909
), k2_ab3s = c(33306211.9166, 28220338.4744, 40462423.2281,
23450400.4429, 46044346.1128, 23695405.2598), bmaxab1 = c(4763935.86742,
4297372.01966, 3752983.00638, 4861240.46459, 4269771.8481,
4162098.23435), bmaxab2 = c(1864128.647, 1789714.6047, 2838412.50704,
2122535.96812, 2512362.60884, 1176995.61871), ab1 = c(66.4926766666,
42.7771212442, 45.4212664748, 50.3764074404, 35.4792060556,
34.1116517971), ab2 = c(21.0285105309, 23.5869838719, 18.8524808986,
10.1121885612, 10.9695055644, 12.1154127169), dfactors = c(2.47803921947,
0.874644748155, 0.749837099991, 1.96711589185, 2.5407774352,
1.28554379333), teta1f = c(0.037308451805, 0.035718600749,
0.012495093438, 0.000815957999, 0.002155991091, 0.02579104469
), k1f = c(14790480.9871, 17223538.1853, 19930679.8931, 3524230.46974,
15721827.0137, 13599317.0371), k2f = c(55614283.976, 54695745.7762,
86690362.7036, 99857853.7312, 63119072.711, 37510791.5472
), bmaxf = c(2094770.19484, 3633133.51482, 1361188.05421,
2001027.51219, 2534273.6726, 3765850.14143), dfactorf = c(0.745459795314,
2.04869176933, 0.853221909609, 1.76652410119, 0.523675021418,
1.0808768613), k2b = c(1956.92858062, 1400.78738327, 1771.23607857,
1104.05501369, 1756.6767193, 1509.9294956), amaxb = c(38588.0915097,
35158.1672213, 25711.062782, 21103.1603387, 27230.6973685,
43720.3558889999), dfactorb = c(0.822346959126, 2.34421354848,
0.79990635332, 2.99070447299, 1.76373031599, 1.38640223249
), roti = c(16.1560390049, 12.7223971386, 6.43238062144,
15.882552267, 16.0836252663, 18.2734832893), rotmaxbp = c(0.235615453341,
0.343204895932, 0.370304533553, 0.488746319999, 0.176135112774,
0.46921999001), R = c(0.022186087, 0.023768855, 0.023911029,
0.023935705, 0.023655335, 0.022402726)), .Names = c("fy",
"fu", "E", "eu", "imp_local", "imp_global", "teta1c", "k1c",
"k2_2L", "k2_3L", "k2_4L", "bmaxc", "dfactorc", "amaxc", "teta1s",
"k1s", "k2_ab1s", "k2_ab2s", "k2_ab3s", "bmaxab1", "bmaxab2",
"ab1", "ab2", "dfactors", "teta1f", "k1f", "k2f", "bmaxf", "dfactorf",
"k2b", "amaxb", "dfactorb", "roti", "rotmaxbp", "R"), row.names = c(7L,
8L, 20L, 23L, 28L, 29L), class = "data.frame")
The data has no duplicate rows, zero values, or NaNs. Any help is appreciated.
I guess the problem is caused by MaxNWts, which is the maximum allowable number of weights. The value you gave is smaller than the number of weights a network needs once size exceeds 5 units. It should be at least:
MaxNWts = max(nnetGrid$size) * (n_predictors + 1)
          + max(nnetGrid$size) + 1
where n_predictors = ncol(trainSet) - 1 = 34 and the trailing "+ max(nnetGrid$size) + 1" covers the single output neuron. So, in your case, it should be at least MaxNWts = 10 * (34 + 1) + 10 + 1 = 361.
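A minimal sketch of that calculation (assuming a single output neuron, as in this one-response regression), so the cap tracks whatever grid is used:
# Hedged sketch: derive the MaxNWts cap from the tuning grid itself,
# assuming one output neuron for the single response R.
n_predictors <- ncol(trainSet) - 1      # 34 predictors; R is the response
max_size     <- max(nnetGrid$size)      # largest hidden layer size, here 10
MaxNWts_min  <- max_size * (n_predictors + 1) + (max_size + 1)  # 361
Passing MaxNWts = MaxNWts_min (or anything larger) to train() keeps the bound valid when the grid changes.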
I am trying to run the Monocle3 function find_gene_modules() on a cell_data_set (cds) but am getting a variety of errors. I have not had any other issues before this. I am working with an imported Seurat object. My first error stated that the number of rows was not the same between my cds and cds@preprocess_aux$gene_loadings values. I took a look, and it seems my gene loadings were a list under cds@preprocess_aux@listData$gene_loadings. I then ran the following code to make a data frame version of the gene loadings:
test <- seurat@assays$RNA@counts@Dimnames[[1]]
test <- as.data.frame(test)
cds@preprocess_aux$gene_loadings <- test
rownames(cds@preprocess_aux$gene_loadings) <- cds@preprocess_aux$gene_loadings[, 1]
This created a cds@preprocess_aux$gene_loadings data frame with the same number of rows and row names as my cds. That resolved my original error but led to a new error thrown from uwot:
15:34:02 UMAP embedding parameters a = 1.577 b = 0.8951
Error in uwot(X = X, n_neighbors = n_neighbors, n_components = n_components, :
No numeric columns found
Running traceback() produces the following information.
> traceback()
4: stop("No numeric columns found")
3: uwot(X = X, n_neighbors = n_neighbors, n_components = n_components,
metric = metric, n_epochs = n_epochs, alpha = learning_rate,
scale = scale, init = init, init_sdev = init_sdev, spread = spread,
min_dist = min_dist, set_op_mix_ratio = set_op_mix_ratio,
local_connectivity = local_connectivity, bandwidth = bandwidth,
gamma = repulsion_strength, negative_sample_rate = negative_sample_rate,
a = a, b = b, nn_method = nn_method, n_trees = n_trees, search_k = search_k,
method = "umap", approx_pow = approx_pow, n_threads = n_threads,
n_sgd_threads = n_sgd_threads, grain_size = grain_size, y = y,
target_n_neighbors = target_n_neighbors, target_weight = target_weight,
target_metric = target_metric, pca = pca, pca_center = pca_center,
pca_method = pca_method, pcg_rand = pcg_rand, fast_sgd = fast_sgd,
ret_model = ret_model || "model" %in% ret_extra, ret_nn = ret_nn ||
"nn" %in% ret_extra, ret_fgraph = "fgraph" %in% ret_extra,
batch = batch, opt_args = opt_args, epoch_callback = epoch_callback,
tmpdir = tempdir(), verbose = verbose)
2: uwot::umap(as.matrix(preprocess_mat), n_components = max_components,
metric = umap.metric, min_dist = umap.min_dist, n_neighbors = umap.n_neighbors,
fast_sgd = umap.fast_sgd, n_threads = cores, verbose = verbose,
nn_method = umap.nn_method, ...)
1: find_gene_modules(cds[pr_deg_ids, ], reduction_method = "UMAP",
max_components = 2, umap.metric = "cosine", umap.min_dist = 0.1,
umap.n_neighbors = 15L, umap.fast_sgd = FALSE, umap.nn_method = "annoy",
k = 20, leiden_iter = 1, partition_qval = 0.05, weight = FALSE,
resolution = 0.001, random_seed = 0L, cores = 1, verbose = T)
I really have no idea what I am doing wrong or how to proceed from here. Does anyone with experience with uwot know where my error is coming from? Really appreciate the help!
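A hedged diagnostic for anyone reproducing this (an assumption about the cause, not a confirmed fix): test above is built from Dimnames[[1]], i.e. the gene names, so the replacement gene_loadings contains a single character column and no numeric loadings, which is exactly what uwot's "No numeric columns found" complains about. Checking the column types of the slot would confirm this:
# Hedged check: every column of gene_loadings should be numeric for uwot;
# a frame built from Dimnames[[1]] holds only a character column.
sapply(cds@preprocess_aux$gene_loadings, is.numeric)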
This is what my data looks like:
> dput(head(GDP_NUTS2,5))
structure(list(Regiao = c("T", "N", "Ag", "C", "AML"), t2000 = c(12529.42964,
10054.60679, 13045.59069, 10621.51789, 18104.36306), t2001 = c(13142.7713,
10652.46712, 13920.41552, 11101.08412, 18865.55149), t2002 = c(13714.17406,
11001.34917, 14612.37052, 11507.36163, 19812.29293), t2003 = c(13985.02689,
11031.7278, 15137.89461, 11884.96687, 20165.68892), t2004 = c(14537.15966,
11354.02317, 15479.68985, 12364.05053, 21068.05117), t2005 = c(15107.92333,
11875.44359, 16237.49791, 12754.40299, 21829.31373), t2006 = c(15816.27567,
12439.6426, 17046.29326, 13378.47797, 22714.25829), t2007 = c(16660.99538,
13229.02402, 17981.40383, 14044.39707, 23847.44923), t2008 = c(16971.19746,
13579.51144, 18226.74178, 14091.85326, 24347.83971), t2009 = c(16606.6617,
13243.19054, 17038.45595, 13974.46502, 23794.44899), t2010 = c(16986.91604,
13677.38358, 16976.83391, 14284.14565, 24119.66719), t2011 = c(16655.71238,
13491.68626, 16347.69468, 14011.54637, 23503.1765), t2012 = c(15963.69251,
13111.6173, 16059.51047, 13623.68635, 22118.01701), t2013 = c(16257.04222,
13473.68717, 16301.87448, 13919.18355, 22337.24739), t2014 = c(16596.21219,
13935.07757, 16974.57715, 14220.1043, 22491.62875), t2015 = c(17322.0514,
14570.33755, 17851.78088, 14983.95312, 23101.89351), t2016 = c(18033.44444,
15283.33044, 19251.57661, 15620.77307, 23800.20038), t2017 = c(19006.33518,
16083.53849, 20893.19975, 16410.11278, 24938.22636), t2018 = c(19938.15583,
17031.94867, 22131.96942, 17242.70015, 25974.24055), t2019 = c(20755.955,
17712.44223, 23145.30242, 18045.54697, 26970.71178)), row.names = c(NA,
-5L), class = c("tbl_df", "tbl", "data.frame"))
I'm using the "REAT" package to test the absolute beta convergence comparing years 2000 (t2000) and 2019 (t2019) with OLS (Ordinary Least Squares) estimation using function betaconv.ols().
I've used this code: betaconv.ols(GDP_NUTS2$t2000, 2000, GDP_NUTS2$t2019, 2019, output.results = TRUE) I tried other version of the code but my major problem is the output.results=TRUE because I get always this error: Error in betaconv.ols(GDP_NUTS2$t2000, 2000, GDP_NUTS2$t2019, 2019, output.results = TRUE) : unused argument (output.results = TRUE)
I've been searching for alternatives of output.results but no success.
Any help will be much appreciated.
The argument is print.results, based on the args() of the function:
> args(betaconv.ols)
function (gdp1, time1, gdp2, time2, conditions = NULL, beta.plot = FALSE,
beta.plotPSize = 1, beta.plotPCol = "black", beta.plotLine = FALSE,
beta.plotLineCol = "red", beta.plotX = "Ln (initial)", beta.plotY = "Ln (growth)",
beta.plotTitle = "Beta convergence", beta.bgCol = "gray95",
beta.bgrid = TRUE, beta.bgridCol = "white", beta.bgridSize = 2,
beta.bgridType = "solid", print.results = FALSE)
NULL
betaconv.ols(GDP_NUTS2$t2000, 2000, GDP_NUTS2$t2019, 2019, print.results = TRUE)
Output:
Absolute Beta Convergence
Model coefficients (Estimation method: OLS)
Estimate Std. Error t value Pr (>|t|)
Alpha 1.537689e-01 0.048509886 3.169847 0.05048663
Beta -1.341938e-02 0.005137275 -2.612158 0.07953682
Lambda 7.110647e-04 NA NA NA
Halflife 9.748018e+02 NA NA NA
Model summary
Estimate F value df 1 df 2 Pr (>F)
R-Squared 0.6946059 6.823372 1 3 0.07953682
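Since every available argument is visible in the args() listing above, the same call can also draw the convergence plot; a hedged sketch using only parameters from that listing:
# Sketch: same OLS estimation, additionally plotting beta convergence
# (beta.plot and beta.plotLine are taken from args(betaconv.ols) above).
betaconv.ols(GDP_NUTS2$t2000, 2000, GDP_NUTS2$t2019, 2019,
             beta.plot = TRUE, beta.plotLine = TRUE, print.results = TRUE)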
I would like to tune the "classif.h2o.deeplearning" learner via mlr. During the tuning there are several architectures I would like to explore, and for each of these architectures I would like to specify a dropout space. However, I am struggling with this.
Example:
library(mlr)
library(h2o)
ctrl <- makeTuneControlRandom(maxit = 10)
lrn <- makeLearner("classif.h2o.deeplearning", predict.type = "prob")
I define two architectures, "a" and "b", via the "hidden" DiscreteParam; for each of them I would like to create a NumericVectorParam for "hidden_dropout_ratios":
par_set <- makeParamSet(
makeDiscreteParam("hidden", values = list(a = c(16L, 16L),
b = c(16L, 16L, 16L))),
makeDiscreteParam("activation", values = "RectifierWithDropout", tunable = FALSE),
makeNumericParam("input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1),
makeNumericVectorParam("hidden_dropout_ratios", len = 2, lower = 0, upper = 0.6, default = rep(0.3, 2),
requires = quote(length(hidden) == 2)),
makeNumericVectorParam("hidden_dropout_ratios", len = 3, lower = 0, upper = 0.6, default = rep(0.3, 3),
requires = quote(length(hidden) == 3)))
This produces an error:
Error in makeParamSet(makeDiscreteParam("hidden", values = list(a = c(16L, :
All parameters must have unique names!
Setting just one of them results in dropout being applied only to architectures with the matching number of hidden layers. When I attempt to use the same dropout ratio for all hidden layers:
par_set <- makeParamSet(
makeDiscreteParam("hidden", values = list(a = c(16L, 16L),
b = c(16L, 16L, 16L))),
makeDiscreteParam("activation", values = "RectifierWithDropout", tunable = FALSE),
makeNumericParam("input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1),
makeNumericParam("hidden_dropout_ratios", lower = 0, upper = 0.6, default = 0.3))
tw <- makeTuneWrapper(lrn,
resampling = cv3,
control = ctrl,
par.set = par_set,
show.info = TRUE,
measures = list(auc,
bac))
perf_tw <- resample(tw,
task = sonar.task,
resampling = cv5,
extract = getTuneResult,
models = TRUE,
show.info = TRUE,
measures = list(auc,
bac))
I get the error:
Error in .h2o.doSafeREST(h2oRestApiVersion = h2oRestApiVersion, urlSuffix = page, :
ERROR MESSAGE:
Illegal argument(s) for DeepLearning model: DeepLearning_model_R_1566289564965_2. Details: ERRR on field: _hidden_dropout_ratios: Must have 3 hidden layer dropout ratios.
Perhaps I could overcome this by creating a separate learner for each architecture and then combining them with makeModelMultiplexer?
I would like your help in overcoming this. Thanks.
EDIT: I was able to overcome this using makeModelMultiplexer and by creating a learner for each architecture (number of hidden layers).
base_lrn <- list(
makeLearner("classif.h2o.deeplearning",
id = "h20_2",
predict.type = "prob"),
makeLearner("classif.h2o.deeplearning",
id = "h20_3",
predict.type = "prob"))
mm_lrn <- makeModelMultiplexer(base_lrn)
par_set <- makeParamSet(
makeDiscreteParam("selected.learner", values = extractSubList(base_lrn, "id")),
makeDiscreteParam("h20_2.hidden", values = list(a = c(16L, 16L),
b = c(32L, 32L)),
requires = quote(selected.learner == "h20_2")),
makeDiscreteParam("h20_3.hidden", values = list(a = c(16L, 16L, 16L),
b = c(32L, 32L, 32L)),
requires = quote(selected.learner == "h20_3")),
makeDiscreteParam("h20_2.activation", values = "RectifierWithDropout", tunable = FALSE,
requires = quote(selected.learner == "h20_2")),
makeDiscreteParam("h20_3.activation", values = "RectifierWithDropout", tunable = FALSE,
requires = quote(selected.learner == "h20_3")),
makeNumericParam("h20_2.input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1,
requires = quote(selected.learner == "h20_2")),
makeNumericParam("h20_3.input_dropout_ratio", lower = 0, upper = 0.4, default = 0.1,
requires = quote(selected.learner == "h20_3")),
makeNumericVectorParam("h20_2.hidden_dropout_ratios", len = 2, lower = 0, upper = 0.6, default = rep(0.3, 2),
requires = quote(selected.learner == "h20_2")),
makeNumericVectorParam("h20_3.hidden_dropout_ratios", len = 3, lower = 0, upper = 0.6, default = rep(0.3, 3),
requires = quote(selected.learner == "h20_3")))
tw <- makeTuneWrapper(mm_lrn,
resampling = cv3,
control = ctrl,
par.set = par_set,
show.info = TRUE,
measures = list(auc,
bac))
perf_tw <- resample(tw,
task = sonar.task,
resampling = cv5,
extract = getTuneResult,
models = TRUE,
show.info = TRUE,
measures = list(auc,
bac))
Is there a more elegant solution?
I have no experience with h2o learners or their deep learning approach. However, specifying the same parameter twice in a single ParamSet (as in your first try) won't work, so you will always need two ParamSets anyway.
I cannot say anything about the second error you are getting; it looks like an h2o-related problem.
Using makeModelMultiplexer() is one option. You can also run separate benchmark() calls and aggregate the results afterwards.
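A minimal sketch of that benchmark() route, assuming the base_lrn list and ctrl from the question: one tune wrapper per architecture, each with a fixed-length hidden_dropout_ratios, compared in a single benchmark() call.
# Hedged sketch: one ParamSet per architecture, compared via benchmark()
# instead of makeModelMultiplexer(); assumes base_lrn and ctrl from above.
make_arch_ps <- function(hiddens, n_layers) {
  makeParamSet(
    makeDiscreteParam("hidden", values = hiddens),
    makeDiscreteParam("activation", values = "RectifierWithDropout", tunable = FALSE),
    makeNumericParam("input_dropout_ratio", lower = 0, upper = 0.4),
    makeNumericVectorParam("hidden_dropout_ratios", len = n_layers,
                           lower = 0, upper = 0.6))
}
tw2 <- makeTuneWrapper(base_lrn[[1]], resampling = cv3, control = ctrl,
                       par.set = make_arch_ps(list(a = c(16L, 16L), b = c(32L, 32L)), 2),
                       measures = list(auc, bac))
tw3 <- makeTuneWrapper(base_lrn[[2]], resampling = cv3, control = ctrl,
                       par.set = make_arch_ps(list(a = c(16L, 16L, 16L), b = c(32L, 32L, 32L)), 3),
                       measures = list(auc, bac))
bmr <- benchmark(list(tw2, tw3), tasks = sonar.task,
                 resamplings = cv5, measures = list(auc, bac))
The per-learner results in bmr can then be aggregated and compared directly, trading the single multiplexed search for two smaller, simpler ones.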
Code:
library(caret)
# Adaptive resampling method for fitting the MARS model
ctrlada <- trainControl(method = "adaptive_cv", number = 10, returnResamp = "final",
adaptive = list(min = 5,
alpha = 0.05,
method = "gls",
complete = TRUE),
allowParallel = TRUE) #10 separate 10-fold cross-validations are used as the resampling scheme
set.seed(100)
marsFitacv <- train(R ~ ., data = trainSet,
method = "earth",
tuneGrid = expand.grid(degree = 2, nprune = 40:80),
trControl = ctrlada)
Error:
x parameter filtering failed
Error in `$<-.data.frame`(`*tmp*`, "nprune", value = NA) :
replacement has 1 row, data has 0
Data:
dput(head(trainSet))
structure(list(fy = c(317.913756282, 365.006253069, 392.548100067,
305.350697829, 404.999341917, 326.558279739), fu = c(538.962896683,
484.423120589, 607.974981919, 566.461909098, 580.287855801, 454.178316794
), E = c(194617.707566, 181322.455065, 206661.286272, 182492.029532,
189867.929239, 181991.379749), eu = c(0.153782620813, 0.208857408687,
0.29933255604, 0.277013319499, 0.251278125174, 0.20012525805),
imp_local = c(1555.3450957, 1595.41614044, 763.56392418,
1716.78277731, 1045.72429616, 802.742305814), imp_global = c(594.038972858,
1359.48216529, 1018.89209367, 850.887850177, 1381.3557372,
1714.66351462), teta1c = c(0.033375064111, 0.021482368218,
0.020905367537, 0.006956337817, 0.034913536977, 0.03009770223
), k1c = c(4000921.55552, 4499908.41979, 9764999.26902, 9273400.46159,
6163057.88855, 12338543.5703), k2_2L = c(98633499.5682, 53562216.5496,
51597126.6866, 79496746.0098, 54060378.6334, 88854286.5457
), k2_3L = c(53752551.0262, 125020222.794, 124021434.482,
125817803.431, 75021821.6702, 35160224.288), k2_4L = c(56725106.5978,
126865701.893, 145764489.664, 64837586.8755, 49128911.0832,
70088564.0166), bmaxc = c(3481281.32908, 4393584.00639, 2614830.02391,
3128593.72039, 3179348.29527, 4274637.35956), dfactorc = c(2.5474729895,
2.94296926288, 2.79505551368, 2.47882735165, 2.46407943564,
1.41121223341), amaxc = c(73832.9746763, 99150.5068997, 77165.4338508,
128546.996471, 53819.0447533, 54870.9707106), teta1s = c(0.015467320192,
0.013675755546, 0.031668366149, 0.028898297322, 0.019211801086,
0.013349768955), k1s = c(5049506.54552, 11250622.6842, 13852560.5089,
18813117.5726, 18362782.7372, 14720875.0829), k2_ab1s = c(276542468.441,
275768806.723, 211613299.608, 264475187.749, 162043062.526,
252936228.465), k2_ab2s = c(108971516.033, 114017918.32,
248886114.151, 213529935.615, 236891513.077, 142986118.909
), k2_ab3s = c(33306211.9166, 28220338.4744, 40462423.2281,
23450400.4429, 46044346.1128, 23695405.2598), bmaxab1 = c(4763935.86742,
4297372.01966, 3752983.00638, 4861240.46459, 4269771.8481,
4162098.23435), bmaxab2 = c(1864128.647, 1789714.6047, 2838412.50704,
2122535.96812, 2512362.60884, 1176995.61871), ab1 = c(66.4926766666,
42.7771212442, 45.4212664748, 50.3764074404, 35.4792060556,
34.1116517971), ab2 = c(21.0285105309, 23.5869838719, 18.8524808986,
10.1121885612, 10.9695055644, 12.1154127169), dfactors = c(2.47803921947,
0.874644748155, 0.749837099991, 1.96711589185, 2.5407774352,
1.28554379333), teta1f = c(0.037308451805, 0.035718600749,
0.012495093438, 0.000815957999, 0.002155991091, 0.02579104469
), k1f = c(14790480.9871, 17223538.1853, 19930679.8931, 3524230.46974,
15721827.0137, 13599317.0371), k2f = c(55614283.976, 54695745.7762,
86690362.7036, 99857853.7312, 63119072.711, 37510791.5472
), bmaxf = c(2094770.19484, 3633133.51482, 1361188.05421,
2001027.51219, 2534273.6726, 3765850.14143), dfactorf = c(0.745459795314,
2.04869176933, 0.853221909609, 1.76652410119, 0.523675021418,
1.0808768613), k2b = c(1956.92858062, 1400.78738327, 1771.23607857,
1104.05501369, 1756.6767193, 1509.9294956), amaxb = c(38588.0915097,
35158.1672213, 25711.062782, 21103.1603387, 27230.6973685,
43720.3558889999), dfactorb = c(0.822346959126, 2.34421354848,
0.79990635332, 2.99070447299, 1.76373031599, 1.38640223249
), roti = c(16.1560390049, 12.7223971386, 6.43238062144,
15.882552267, 16.0836252663, 18.2734832893), rotmaxbp = c(0.235615453341,
0.343204895932, 0.370304533553, 0.488746319999, 0.176135112774,
0.46921999001), R = c(0.022186087, 0.023768855, 0.023911029,
0.023935705, 0.023655335, 0.022402726)), .Names = c("fy",
"fu", "E", "eu", "imp_local", "imp_global", "teta1c", "k1c",
"k2_2L", "k2_3L", "k2_4L", "bmaxc", "dfactorc", "amaxc", "teta1s",
"k1s", "k2_ab1s", "k2_ab2s", "k2_ab3s", "bmaxab1", "bmaxab2",
"ab1", "ab2", "dfactors", "teta1f", "k1f", "k2f", "bmaxf", "dfactorf",
"k2b", "amaxb", "dfactorb", "roti", "rotmaxbp", "R"), row.names = c(7L,
8L, 20L, 23L, 28L, 29L), class = "data.frame")
The data has no duplicate rows or NaNs.
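One hedged diagnostic (an assumption about the cause, not a confirmed fix): with nprune = 40:80, any candidate model requesting more terms than earth can actually build will fail, and caret is then left with zero rows of results to filter, which would match the "replacement has 1 row, data has 0" message. Fitting earth once outside caret shows how many terms are available:
# Hedged sketch: count the terms of an unpruned earth fit; nprune values
# above this ceiling cannot be evaluated during tuning.
library(earth)
fit_full <- earth(R ~ ., data = trainSet, degree = 2, pmethod = "none")
nrow(fit_full$dirs)  # total terms available for pruning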
I am looking at the ugarchboot function in rugarch, but I am having trouble getting the Series (summary) into a data frame.
library(rugarch)
data(dji30ret)
spec = ugarchspec(variance.model=list(model="gjrGARCH", garchOrder=c(1,1)),
mean.model=list(armaOrder=c(1,1), arfima=FALSE, include.mean=TRUE,
archm = FALSE, archpow = 1), distribution.model="std")
ctrl = list(tol = 1e-7, delta = 1e-9)
fit = ugarchfit(data=dji30ret[, "BA", drop = FALSE], out.sample = 0,
spec = spec, solver = "solnp", solver.control = ctrl,
fit.control = list(scale = 1))
bootpred = ugarchboot(fit, method = "Partial", n.ahead = 120, n.bootpred = 2000)
bootpred
as.data.frame(bootpred, which = "sigma", type = "q", qtile = c(0.01, 0.05))
## I am trying to get this into a data frame:
Series (summary):
min q.25 mean q.75 max forecast
t+1 -0.24531 -0.016272 0.000143 0.018591 0.16263 0.000743
t+2 -0.24608 -0.018006 -0.000290 0.017816 0.16160 0.000232
t+3 -0.24333 -0.017131 0.001007 0.017884 0.31861 0.000413
t+4 -0.26126 -0.018643 -0.000618 0.017320 0.34078 0.000349
t+5 -0.19406 -0.018545 -0.000453 0.016690 0.33356 0.000372
t+6 -0.23864 -0.017268 -0.000113 0.016001 0.18233 0.000364
t+7 -0.27024 -0.018031 -0.000514 0.017852 0.18436 0.000367
t+8 -0.13926 -0.016676 0.000539 0.017904 0.16271 0.000366
t+9 -0.32941 -0.017221 -0.000194 0.016718 0.13894 0.000366
t+10 -0.19013 -0.015845 0.001095 0.017064 0.14498 0.000366
Thank you for your help.
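A hedged suggestion, extending the as.data.frame call already used for sigma above: which should also accept "series", and type = "summary" is assumed (worth verifying against the rugarch documentation for the uGARCHboot methods) to return the printed summary table directly:
# Hedged sketch: same accessor as above, switched to the series; the
# "summary" type is assumed to yield the min/q.25/mean/q.75/max table.
series_summary <- as.data.frame(bootpred, which = "series", type = "summary")
head(series_summary)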