Extracting Confidence Intervals from a fable Forecast in R

I'm encountering an issue when extracting the 90%/95% confidence intervals from forecasts built on a key variable holding 5 groups, across a total of 4 forecasting models.
The primary problem is that I'm not familiar with how R treats and works with the <dist> and <hilo> object types.
The original tsibble has a structure of 60 months for each of the 5 groups (300 observations):
>groups
# A tsibble: 300 x 3 [1M]
# Key: Group [5]
Month Group Measure
<mth> <chr> <dbl>
1 2016 May Group1 8.75
2 2016 Jun Group1 8.5
3 2016 Jul Group1 7
4 2016 Aug Group1 10
5 2016 Sep Group1 2
6 2016 Oct Group1 6
7 2016 Nov Group1 8
8 2016 Dec Group1 0
9 2017 Jan Group1 16
10 2017 Feb Group1 9
... with 290 more rows
I form a model with different forecast methods, as well as a combination model:
groups %>%
  model(ets = ETS(Measure),
        mean = MEAN(Measure),
        snaive = SNAIVE(Measure)) %>%
  mutate(combination = (ets + mean + snaive) / 3) -> groups_avg
This results in a mable with the following structure:
>groups_avg
# A mable: 5 x 5
# Key: Group [5]
Group ets mean snaive combination
<chr> <model> <mode> <model> <model>
1 Group1 <ETS(A,N,N)> <MEAN> <SNAIVE> <COMBINATION>
2 Group2 <ETS(A,N,N)> <MEAN> <SNAIVE> <COMBINATION>
3 Group3 <ETS(M,N,N)> <MEAN> <SNAIVE> <COMBINATION>
4 Group4 <ETS(A,N,N)> <MEAN> <SNAIVE> <COMBINATION>
5 Group5 <ETS(A,N,N)> <MEAN> <SNAIVE> <COMBINATION>
I then forecast 6 months ahead:
groups_avg %>% forecast(h = 6, level = c(90, 95)) -> groups_fc
Then I generate what I expect the output tsibble should be:
> groups_fc %>% hilo(level = c(90, 95)) -> groups_hilo
> groups_hilo
# A tsibble: 120 x 7 [1M]
# Key: Group, .model [20]
Group .model Month Measure .mean `90%` `95%`
<chr> <chr> <mth> <dist> <dbl> <hilo> <hilo>
1 Group1 ets 2021 May N(12, 21) 11.6 [4.1332418, 19.04858]90 [ 2.704550, 20.47727]95
2 Group1 ets 2021 Jun N(12, 21) 11.6 [4.0438878, 19.13793]90 [ 2.598079, 20.58374]95
3 Group1 ets 2021 Jul N(12, 22) 11.6 [3.9555794, 19.22624]90 [ 2.492853, 20.68897]95
4 Group1 ets 2021 Aug N(12, 22) 11.6 [3.8682807, 19.31354]90 [ 2.388830, 20.79299]95
5 Group1 ets 2021 Sep N(12, 23) 11.6 [3.7819580, 19.39986]90 [ 2.285970, 20.89585]95
6 Group1 ets 2021 Oct N(12, 23) 11.6 [3.6965790, 19.48524]90 [ 2.184235, 20.99758]95
7 Group1 mean 2021 May N(8, 21) 7.97 [0.3744124, 15.56725]90 [-1.080860, 17.02253]95
8 Group1 mean 2021 Jun N(8, 21) 7.97 [0.3744124, 15.56725]90 [-1.080860, 17.02253]95
9 Group1 mean 2021 Jul N(8, 21) 7.97 [0.3744124, 15.56725]90 [-1.080860, 17.02253]95
10 Group1 mean 2021 Aug N(8, 21) 7.97 [0.3744124, 15.56725]90 [-1.080860, 17.02253]95
# ... with 110 more rows
As I've done with more simply structured forecasts, I tried to write these forecast results to a csv.
> write.csv(groups_hilo, dir)
Error: Can't convert <hilo> to <character>.
Run `rlang::last_error()` to see where the error occurred.
But I am quite lost on how to coerce the generated 90/95% confidence intervals into a format that I can export. Has anyone encountered this issue?
Please let me know if I should include any more information!
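One way to make the <hilo> columns exportable is to unpack them into plain numeric bounds and drop the <dist> column before writing. A minimal sketch, assuming the groups_fc fable built above (the output file name is just a placeholder):

```r
library(fable)   # loads fabletools, which provides hilo() and unpack_hilo()
library(dplyr)

groups_fc %>%
  hilo(level = c(90, 95)) %>%
  unpack_hilo(c("90%", "95%")) %>%  # each <hilo> becomes numeric `_lower`/`_upper` columns
  as_tibble() %>%                   # drop the tsibble structure
  select(-Measure) %>%              # the <dist> column can't be written to csv either
  write.csv("groups_fc.csv", row.names = FALSE)
```

Alternatively, the bounds can be pulled out by hand, e.g. mutate(lo90 = `90%`$lower, hi90 = `90%`$upper), since each part of a <hilo> is accessible with $lower, $upper and $level.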

Related

R loop over nominal list and integers

I have a dataset where I have been able to loop over different test values with dpois. For simplicity's sake, I have used an average of 4 events per month and I wanted to know what is the likelihood of n or more events, given the average. Here is what I have managed to make work:
MonthlyAverage <- 4
cnt <- 0:10
for (i in cnt) {
  CountProb <- ppois(cnt, MonthlyAverage, lower.tail = FALSE)
}
dfProb <- data.frame(cnt, CountProb)
I am interested in investigating this to figure out how many events I may expect each month given the mean of that month.
I would be looking to say:
For January, what is the probability of 0
For January, what is the probability of 1
For January, what is the probability of 2
etc...
For February, what is the probability of 0
For February, what is the probability of 1
For February, what is the probability of 2
etc.
To give something like (numbers here are just an example):
I thought about using one loop to select the correct month, removing the month column so I am just left with the single "Monthly Average" value, and then performing the count loop, but that doesn't seem to work: I still get "Non-numeric argument to mathematical function". I feel like I'm close, but can anyone please point me in the right direction?
A "tidy-style" solution:
library(tidyr)
library(dplyr)
## example data:
df <- data.frame(Month = c('Jan', 'Feb'),
MonthlyAverage = c(5, 2)
)
> df
Month MonthlyAverage
1 Jan 5
2 Feb 2
df |>
  mutate(n = list(1:10)) |>
  unnest_longer(n) |>
  mutate(CountProb = ppois(n, MonthlyAverage, lower.tail = FALSE))
# A tibble: 20 x 4
Month MonthlyAverage n CountProb
<chr> <dbl> <int> <dbl>
1 Jan 5 1 0.960
2 Jan 5 2 0.875
3 Jan 5 3 0.735
4 Jan 5 4 0.560
5 Jan 5 5 0.384
6 Jan 5 6 0.238
## ...
How about something like this:
cnt <- 0:10
MonthlyAverage <- c(1.8, 1.56, 2.44, 1.86, 2.1, 2.3, 2, 2.78, 1.89, 1.86, 1.4, 1.71)
grid <- expand.grid(cnt = cnt, m_num = 1:12)
grid$MonthlyAverage <- MonthlyAverage[grid$m_num]
mnames <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
grid$month <- mnames[grid$m_num]
grid$prob <- ppois(grid$cnt, grid$MonthlyAverage, lower.tail = FALSE)
grid[, c("month", "cnt", "prob")]
#> month cnt prob
#> 1 Jan 0 8.347011e-01
#> 2 Jan 1 5.371631e-01
#> 3 Jan 2 2.693789e-01
#> 4 Jan 3 1.087084e-01
#> 5 Jan 4 3.640666e-02
#> 6 Jan 5 1.037804e-02
#> 7 Jan 6 2.569450e-03
#> 8 Jan 7 5.615272e-04
#> 9 Jan 8 1.097446e-04
#> 10 Jan 9 1.938814e-05
#> 11 Jan 10 3.123964e-06
#> 12 Feb 0 7.898639e-01
#> 13 Feb 1 4.620517e-01
#> 14 Feb 2 2.063581e-01
#> 15 Feb 3 7.339743e-02
#> 16 Feb 4 2.154277e-02
#> 17 Feb 5 5.364120e-03
#> 18 Feb 6 1.157670e-03
#> 19 Feb 7 2.202330e-04
#> 20 Feb 8 3.743272e-05
#> 21 Feb 9 5.747339e-06
#> 22 Feb 10 8.044197e-07
#> ...
Created on 2023-01-09 by the reprex package (v2.0.1)
If you have each month's mean, in base R you could easily use sapply to estimate the probability of obtaining values 0 to 10 using each month's mean value. Then you can simply combine it in a data frame:
# Data
df <- data.frame(month = month.name,
                 mean = c(1.8, 2.8, 1.7, 1.6, 1.8, 2,
                          2.3, 2.4, 2.1, 1.4, 1.9, 1.9))

probs <- sapply(1:12, function(x) ppois(0:10, df$mean[x], lower.tail = FALSE))

finaldata <- data.frame(month = rep(month.name, each = 11),
                        events = rep(0:10, times = 12),
                        prob = as.vector(probs))
Output:
# month events prob
# 1 January 0 8.347011e-01
# 2 January 1 5.371631e-01
# 3 January 2 2.693789e-01
# 4 January 3 1.087084e-01
# 5 January 4 3.640666e-02
# 6 January 5 1.037804e-02
# 7 January 6 2.569450e-03
# 8 January 7 5.615272e-04
# 9 January 8 1.097446e-04
# 10 January 9 1.938814e-05
# 11 January 10 3.123964e-06
# 12 February 0 9.391899e-01
# 13 February 1 7.689218e-01
# 14 February 2 5.305463e-01
# 15 February 3 3.080626e-01
# ...
# 131 December 9 3.044317e-05
# 132 December 10 5.172695e-06

pivot_wider results in list columns instead of expected results

I'm just going to chalk this up to my ignorance, but sometimes the pivot_* functions drive me crazy.
I have a tibble:
# A tibble: 12 x 3
year term estimate
<dbl> <chr> <dbl>
1 2018 intercept -29.8
2 2018 daysuntilelection 8.27
3 2019 intercept -50.6
4 2019 daysuntilelection 7.40
5 2020 intercept -31.6
6 2020 daysuntilelection 6.55
7 2021 intercept -19.0
8 2021 daysuntilelection 4.60
9 2022 intercept -10.7
10 2022 daysuntilelection 6.41
11 2023 intercept 120
12 2023 daysuntilelection 0
that I would like to flip to:
# A tibble: 6 x 3
year intercept daysuntilelection
<dbl> <dbl> <dbl>
1 2018 -29.8 8.27
2 2019 -50.6 7.40
3 2020 -31.6 6.55
4 2021 -19.0 4.60
5 2022 -10.7 6.41
6 2023 120 0
Normally pivot_wider should be able to do this as x %>% pivot_wider(!year, names_from = "term", values_from = "estimate"), but instead it returns a two-column tibble of list columns and a bunch of warnings:
# A tibble: 1 x 2
intercept daysuntilelection
<list> <list>
1 <dbl [6]> <dbl [6]>
Warning message:
Values from `estimate` are not uniquely identified; output will contain list-cols.
* Use `values_fn = list` to suppress this warning.
* Use `values_fn = {summary_fun}` to summarise duplicates.
* Use the following dplyr code to identify duplicates.
{data} %>%
dplyr::group_by(term) %>%
dplyr::summarise(n = dplyr::n(), .groups = "drop") %>%
dplyr::filter(n > 1L)
Where do I go wrong here? Help!
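A likely explanation: because !year is passed positionally, it becomes pivot_wider()'s first argument after the data, id_cols, which excludes year from the row identifiers. With no id column, every term value maps to six estimates, hence the "not uniquely identified" warning and the list columns. A sketch of the fix, assuming the tibble above is named x:

```r
library(tidyr)

# year identifies the rows; term supplies the new column names,
# and estimate fills in the values.
x %>%
  pivot_wider(id_cols = year, names_from = term, values_from = estimate)
```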
Next to the solutions offered in the comments, data.table's dcast() is a very fast way to pivot your data. If the pivot_* functions drive you crazy, maybe this is a nice alternative for you:
x <- read.table(text = "
1 2018 intercept -29.8
2 2018 daysuntilelection 8.27
3 2019 intercept -50.6
4 2019 daysuntilelection 7.40
5 2020 intercept -31.6
6 2020 daysuntilelection 6.55
7 2021 intercept -19.0
8 2021 daysuntilelection 4.60
9 2022 intercept -10.7
10 2022 daysuntilelection 6.41
11 2023 intercept 120
12 2023 daysuntilelection 0")
names(x) <- c("id", "year", "term", "estimate")
library(data.table)
dcast(as.data.table(x), year ~ term)
#> Using 'estimate' as value column. Use 'value.var' to override
#> year daysuntilelection intercept
#> 1: 2018 8.27 -29.8
#> 2: 2019 7.40 -50.6
#> 3: 2020 6.55 -31.6
#> 4: 2021 4.60 -19.0
#> 5: 2022 6.41 -10.7
#> 6: 2023 0.00 120.0
DATA
df <- read.table(text = "
1 2018 intercept -29.8
2 2018 daysuntilelection 8.27
3 2019 intercept -50.6
4 2019 daysuntilelection 7.40
5 2020 intercept -31.6
6 2020 daysuntilelection 6.55
7 2021 intercept -19.0
8 2021 daysuntilelection 4.60
9 2022 intercept -10.7
10 2022 daysuntilelection 6.41
11 2023 intercept 120
12 2023 daysuntilelection 0")
CODE
library(tidyverse)
df %>%
  pivot_wider(names_from = V3, values_from = V4, values_fill = 0) %>%
  group_by(V2) %>%
  summarise_all(sum, na.rm = TRUE)
OUTPUT
V2 V1 intercept daysuntilelection
<int> <int> <dbl> <dbl>
1 2018 3 -29.8 8.27
2 2019 7 -50.6 7.4
3 2020 11 -31.6 6.55
4 2021 15 -19 4.6
5 2022 19 -10.7 6.41
6 2023 23 120 0

code for creating plot of seasonal variability with monthly data?

Month Mean SD
<chr> <dbl> <dbl>
1 Jun 277. 230.
2 Jul 249. 113.
3 Aug 273. 129.
4 Sep 278. 124.
5 Oct 310. 118.
6 Nov 291. 107.
7 Dec 352. 90.4
8 Jan 355. 121.
9 Feb 517. 422.
10 Mar 366. 186.
11 Apr 355. 315.
12 May 239. 136.
How can I produce a plot of seasonal variability from these data in R? Any helpful code?
library(forecast)
#> Registered S3 method overwritten by 'quantmod':
#> method from
#> as.zoo.data.frame zoo
decomp <- stl(gas, 5)
plot(decomp)
print(head(decomp$time.series))
#> seasonal trend remainder
#> Jan 1956 -330.9170 2015.744 24.173111
#> Feb 1956 -402.1667 2020.906 27.261128
#> Mar 1956 -238.4648 2026.067 6.397573
#> Apr 1956 -151.6784 2031.229 -1.550535
#> May 1956 170.7040 2036.306 -34.010462
#> Jun 1956 275.1713 2041.384 4.444837
Created on 2022-03-02 by the reprex package (v2.0.1)
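If the goal is a plot of the monthly summary table itself rather than an stl() decomposition, here is a minimal ggplot2 sketch, assuming the table above is stored in a data frame monthly with columns Month, Mean, and SD (in season order):

```r
library(ggplot2)

monthly$Month <- factor(monthly$Month, levels = monthly$Month)  # keep Jun..May order
ggplot(monthly, aes(x = Month, y = Mean, group = 1)) +
  geom_ribbon(aes(ymin = Mean - SD, ymax = Mean + SD), fill = "grey80") +  # +/- 1 SD band
  geom_line() +
  geom_point() +
  labs(y = "Mean", title = "Seasonal variability (mean +/- 1 SD)")
```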

How to forecast multiple time series in R

I have a dataset that contains multiple series: 50 products, one column per product, where each column holds the daily sales of a product.
I want to forecast these products using ets, so I created the code below. But when I run it I get only one time series and some information that I do not understand. Thanks in advance :)
y<- read.csv("QAO2.csv", header=FALSE, fileEncoding = "latin1")
y <- ts(y[,-1],f=12,s=c(2007, 1))
ns <- ncol(y)
for(i in 1:ns)
fit.ets <- ets(y[,i])
print(fit.ets)
f.ets <- forecast(fit.ets,h=12)
print(f.ets)
plot(f.ets)
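Note that the loop in the question has no braces, so only the fit.ets <- ets(y[,i]) line is repeated; the prints and the plot run once, after the loop, on the last series only. A braced version of the same loop (same variables as above) would at least process every column:

```r
library(forecast)

for (i in 1:ns) {
  fit.ets <- ets(y[, i])              # fit one product's series
  print(fit.ets)
  f.ets <- forecast(fit.ets, h = 12)  # 12-step-ahead forecast
  print(f.ets)
  plot(f.ets)
}
```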
This is what the fable package is designed to do. Here is an example using 50 series of monthly data from 2007. Although you say you have daily data, the code you provide assumes monthly data (frequency 12).
library(fable)
library(dplyr)
library(tidyr)
library(ggplot2)
y <- ts(matrix(rnorm(175*50), ncol=50), frequency=12, start=c(2007,1)) %>%
as_tsibble() %>%
rename(Month = index, Sales=value)
y
#> # A tsibble: 8,750 x 3 [1M]
#> # Key: key [50]
#> Month key Sales
#> <mth> <chr> <dbl>
#> 1 2007 Jan Series 1 1.06
#> 2 2007 Feb Series 1 0.495
#> 3 2007 Mar Series 1 0.332
#> 4 2007 Apr Series 1 0.157
#> 5 2007 May Series 1 -0.120
#> 6 2007 Jun Series 1 -0.0846
#> 7 2007 Jul Series 1 -0.743
#> 8 2007 Aug Series 1 0.714
#> 9 2007 Sep Series 1 1.73
#> 10 2007 Oct Series 1 -0.212
#> # … with 8,740 more rows
fit.ets <- y %>% model(ETS(Sales))
fit.ets
#> # A mable: 50 x 2
#> # Key: key [50]
#> key `ETS(Sales)`
#> <chr> <model>
#> 1 Series 1 <ETS(A,N,N)>
#> 2 Series 10 <ETS(A,N,N)>
#> 3 Series 11 <ETS(A,N,N)>
#> 4 Series 12 <ETS(A,N,N)>
#> 5 Series 13 <ETS(A,N,N)>
#> 6 Series 14 <ETS(A,N,N)>
#> 7 Series 15 <ETS(A,N,N)>
#> 8 Series 16 <ETS(A,N,N)>
#> 9 Series 17 <ETS(A,N,N)>
#> 10 Series 18 <ETS(A,N,N)>
#> # … with 40 more rows
f.ets <- forecast(fit.ets, h=12)
f.ets
#> # A fable: 600 x 5 [1M]
#> # Key: key, .model [50]
#> key .model Month Sales .mean
#> <chr> <chr> <mth> <dist> <dbl>
#> 1 Series 1 ETS(Sales) 2021 Aug N(-0.028, 1.1) -0.0279
#> 2 Series 1 ETS(Sales) 2021 Sep N(-0.028, 1.1) -0.0279
#> 3 Series 1 ETS(Sales) 2021 Oct N(-0.028, 1.1) -0.0279
#> 4 Series 1 ETS(Sales) 2021 Nov N(-0.028, 1.1) -0.0279
#> 5 Series 1 ETS(Sales) 2021 Dec N(-0.028, 1.1) -0.0279
#> 6 Series 1 ETS(Sales) 2022 Jan N(-0.028, 1.1) -0.0279
#> 7 Series 1 ETS(Sales) 2022 Feb N(-0.028, 1.1) -0.0279
#> 8 Series 1 ETS(Sales) 2022 Mar N(-0.028, 1.1) -0.0279
#> 9 Series 1 ETS(Sales) 2022 Apr N(-0.028, 1.1) -0.0279
#> 10 Series 1 ETS(Sales) 2022 May N(-0.028, 1.1) -0.0279
#> # … with 590 more rows
f.ets %>%
filter(key == "Series 1") %>%
autoplot(y) +
labs(title = "Series 1")
Created on 2021-08-05 by the reprex package (v2.0.0)

fable from distribution to confidence interval

I managed to use fable for the forecast and got a result.
Could I have some guidance on how to change this distribution to an 80%/95% confidence interval? Thank you!
You can use the sample code here to get the distribution:
result <- USAccDeaths %>%
  as_tsibble() %>%
  model(arima = ARIMA(log(value) ~ pdq(0,1,1) + PDQ(0,1,1))) %>%
  forecast(h = 12)
The hilo() function allows you to extract confidence intervals from a forecast distribution. It can be used on either the distribution vector, or the fable itself.
library(tidyverse)
library(fable)
result <- as_tsibble(USAccDeaths) %>%
model(arima = ARIMA(log(value) ~ pdq(0,1,1) + PDQ(0,1,1)))%>%
forecast(h=12)
result %>%
mutate(`80%` = hilo(value, 80))
#> # A fable: 12 x 5 [1M]
#> # Key: .model [1]
#> .model index value .mean `80%`
#> <chr> <mth> <dist> <dbl> <hilo>
#> 1 arima 1979 Jan t(N(9, 0.0014)) 8290. [ 7899.082, 8689.169]80
#> 2 arima 1979 Feb t(N(8.9, 0.0018)) 7453. [ 7055.860, 7859.100]80
#> 3 arima 1979 Mar t(N(9, 0.0022)) 8276. [ 7789.719, 8774.054]80
#> 4 arima 1979 Apr t(N(9.1, 0.0025)) 8584. [ 8036.304, 9144.752]80
#> 5 arima 1979 May t(N(9.2, 0.0029)) 9499. [ 8849.860, 10166.302]80
#> 6 arima 1979 Jun t(N(9.2, 0.0033)) 9900. [ 9180.375, 10639.833]80
#> 7 arima 1979 Jul t(N(9.3, 0.0037)) 10988. [10145.473, 11857.038]80
#> 8 arima 1979 Aug t(N(9.2, 0.0041)) 10132. [ 9315.840, 10974.140]80
#> 9 arima 1979 Sep t(N(9.1, 0.0045)) 9138. [ 8368.585, 9933.124]80
#> 10 arima 1979 Oct t(N(9.1, 0.0049)) 9391. [ 8567.874, 10243.615]80
#> 11 arima 1979 Nov t(N(9.1, 0.0052)) 8863. [ 8056.754, 9699.824]80
#> 12 arima 1979 Dec t(N(9.1, 0.0056)) 9356. [ 8474.732, 10271.739]80
result %>%
hilo(level = c(80, 95))
#> # A tsibble: 12 x 6 [1M]
#> # Key: .model [1]
#> .model index value .mean `80%`
#> <chr> <mth> <dist> <dbl> <hilo>
#> 1 arima 1979 Jan t(N(9, 0.0014)) 8290. [ 7899.082, 8689.169]80
#> 2 arima 1979 Feb t(N(8.9, 0.0018)) 7453. [ 7055.860, 7859.100]80
#> 3 arima 1979 Mar t(N(9, 0.0022)) 8276. [ 7789.719, 8774.054]80
#> 4 arima 1979 Apr t(N(9.1, 0.0025)) 8584. [ 8036.304, 9144.752]80
#> 5 arima 1979 May t(N(9.2, 0.0029)) 9499. [ 8849.860, 10166.302]80
#> 6 arima 1979 Jun t(N(9.2, 0.0033)) 9900. [ 9180.375, 10639.833]80
#> 7 arima 1979 Jul t(N(9.3, 0.0037)) 10988. [10145.473, 11857.038]80
#> 8 arima 1979 Aug t(N(9.2, 0.0041)) 10132. [ 9315.840, 10974.140]80
#> 9 arima 1979 Sep t(N(9.1, 0.0045)) 9138. [ 8368.585, 9933.124]80
#> 10 arima 1979 Oct t(N(9.1, 0.0049)) 9391. [ 8567.874, 10243.615]80
#> 11 arima 1979 Nov t(N(9.1, 0.0052)) 8863. [ 8056.754, 9699.824]80
#> 12 arima 1979 Dec t(N(9.1, 0.0056)) 9356. [ 8474.732, 10271.739]80
#> # … with 1 more variable: `95%` <hilo>
To extract the numerical values from a <hilo> object, you can use the unpack_hilo() function, or obtain each part using <hilo>$lower, <hilo>$upper and <hilo>$level.
result %>%
hilo(level = c(80, 95)) %>%
unpack_hilo("80%")
#> # A tsibble: 12 x 7 [1M]
#> # Key: .model [1]
#> .model index value .mean `80%_lower` `80%_upper`
#> <chr> <mth> <dist> <dbl> <dbl> <dbl>
#> 1 arima 1979 Jan t(N(9, 0.0014)) 8290. 7899. 8689.
#> 2 arima 1979 Feb t(N(8.9, 0.0018)) 7453. 7056. 7859.
#> 3 arima 1979 Mar t(N(9, 0.0022)) 8276. 7790. 8774.
#> 4 arima 1979 Apr t(N(9.1, 0.0025)) 8584. 8036. 9145.
#> 5 arima 1979 May t(N(9.2, 0.0029)) 9499. 8850. 10166.
#> 6 arima 1979 Jun t(N(9.2, 0.0033)) 9900. 9180. 10640.
#> 7 arima 1979 Jul t(N(9.3, 0.0037)) 10988. 10145. 11857.
#> 8 arima 1979 Aug t(N(9.2, 0.0041)) 10132. 9316. 10974.
#> 9 arima 1979 Sep t(N(9.1, 0.0045)) 9138. 8369. 9933.
#> 10 arima 1979 Oct t(N(9.1, 0.0049)) 9391. 8568. 10244.
#> 11 arima 1979 Nov t(N(9.1, 0.0052)) 8863. 8057. 9700.
#> 12 arima 1979 Dec t(N(9.1, 0.0056)) 9356. 8475. 10272.
#> # … with 1 more variable: `95%` <hilo>
Created on 2020-04-08 by the reprex package (v0.3.0)