Error specifying the characters argument for gts in R

I have 3 tiers of product for which I'm creating a hierarchical forecast using the gts function from the hts package in R.
My tiers are:
PL1: A3
PL2: AT
PL3: ATA,ATB,ATD,ATH,ATI,ATJ
In reality I have many more, but I limited the structure to this subset as I'm just learning this package. Each PL3 has 40 time observations.
Following this tutorial from Hyndsight, I was able to get something working. However, I don't think I'm specifying the characters argument correctly.
myts <- ts(matrix(data.agg$SalesUnits, ncol = 6, nrow = 40))
blnames <- unique(paste(data.agg$Group.2,  # PL1
                        data.agg$Group.3,  # PL2
                        data.agg$Group.4,  # PL3
                        sep = ""))
colnames(myts) <- blnames
gy <- gts(myts, characters = c(2, 2, 3))
fc <- forecast(gy)
According to the documentation, specifying a numeric vector for characters implies a non-hierarchy:
Because none of these is hierarchical, we could specify characters = list(3, 1, 1), or as a simple numeric vector: characters = c(3, 1, 1). This implies its non-hierarchical structure and its character segments.
I can't figure out how I'm supposed to specify the characters argument correctly. When I try to use lists, the function fails. While my code runs as written, I don't think it's correct, because the output says there are only 2 levels:
Grouped Time Series
2 Levels
Number of groups at each level: 1 6
Total number of series: 7
Number of observations in each historical series: 40
Number of forecasts per series: 10

My mistake. I was using gts where I should have been using hts. That resolved my issue.
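For anyone who hits the same thing, a minimal sketch of that fix, assuming the column names of myts concatenate the 2-character PL1, 2-character PL2 and 3-character PL3 codes as in the code above:
library(hts)
# hts() instead of gts(): characters now describes a strict hierarchy,
# splitting each bottom-level name into 2 + 2 + 3 characters.
gy <- hts(myts, characters = c(2, 2, 3))
fc <- forecast(gy, h = 10)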

Related

Timeseries average based on a defined time interval (bin)

Here is an example of my dataset. I want to calculate a binned average based on time (ts), every 10 seconds. Could you please provide some hints so that I can carry on?
In my case, I want to average time (ts) and Var every 10 seconds. For example, I will get one averaged value of Var and ts from 0 to 10 seconds, another averaged value from 11 to 20 seconds, and so on.
df <- data.frame(ts = seq(1, 100, by = 0.5), Var = runif(199, 1, 10))
Are there any functions or libraries in R that I can use for this task?
There are many ways to calculate a binned average: with base aggregate or by, with the packages dplyr or data.table, probably with zoo, and surely with other time-series packages...
library(dplyr)
df %>%
  group_by(interval = round(ts / 10) * 10) %>%
  summarize(Var_mean = mean(Var))
# A tibble: 11 x 2
   interval Var_mean
      <dbl>    <dbl>
 1        0 4.561653
 2       10 6.544980
 3       20 6.110336
 4       30 4.288523
 5       40 5.339249
 6       50 6.811147
 7       60 6.180795
 8       70 4.920476
 9       80 5.486937
10       90 5.284871
11      100 5.917074
That's the dplyr approach. Note how it and data.table let you name the intermediate variables, which keeps the code clean and legible.
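One caveat worth adding: round(ts / 10) * 10 bins to the nearest multiple of 10, so ts = 14 lands in the 10 bin while ts = 16 lands in the 20 bin. If you want the (0, 10], (10, 20], ... bins described in the question, a small variant (same df as above):
df %>%
  group_by(interval = 10 * ceiling(ts / 10)) %>%
  summarize(Var_mean = mean(Var))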
Assuming df in the question, convert to a zoo object and then aggregate.
The second argument of aggregate.zoo is a vector the same length as the time vector giving the new times that each original time is to be mapped to. The third argument is applied to all time series values whose times have been mapped to the same value. This mapping could be done in various ways but here we have chosen to map times (0, 10] to 10, (10, 20] to 20, etc. by using 10 * ceiling(time(z) / 10).
In light of some of the other comments in the answers, let me point out that, in contrast to using a data frame, there is significant simplification here. First, the data has been reduced to one dimension (vs. two in a data.frame). Second, it is more conducive to the whole-object approach, whereas with data frames one needs to continually pick the object apart and work on those parts. Third, one now has all the facilities of zoo to manipulate the time series: numerous NA-removal schemes, rolling functions, overloaded arithmetic operators, n-way merges, simple access to classic, lattice and ggplot2 graphics, a design which emphasizes consistency with base R (making it easy to learn), and extensive documentation including 5 vignettes plus help files with numerous examples, with likely very few bugs given the 14 years of development and widespread use.
library(zoo)
z <- read.zoo(df)
z10 <- aggregate(z, 10 * ceiling(time(z) / 10), mean)
giving:
> z10
      10       20       30       40       50       60       70       80
5.629926 6.571754 5.519487 5.641534 5.309415 5.793066 4.890348 5.509859
      90      100
4.539044 5.480596
(Note that the data in the question is not reproducible because it used random numbers without set.seed so if you try to repeat the above you won't get an identical answer.)
Now we could plot it, say, using any of these:
plot(z10)
library(lattice)
xyplot(z10)
library(ggplot2)
autoplot(z10)
In general, I agree with @smci: the dplyr and data.table approaches are the best here. Let me elaborate a bit further.
# the dplyr way
library(dplyr)
df %>%
  group_by(interval = ceiling(seq_along(ts) / 20)) %>%  # 20 rows of 0.5 s = 10 s bins
  summarize(variable_mean = mean(Var))
# the data.table way
library(data.table)
dt <- data.table(df)
dt[, list(Var_mean = mean(Var)),
   by = list(interval = ceiling(seq_along(ts) / 20))]
I would not go to the traditional time-series solutions like ts, zoo or xts here. Their methods are more suited to regular frequencies such as monthly or quarterly data. Apart from ts, they can also handle irregular and high-frequency data, but many methods, such as the print methods, don't work as well, or at least do not give you an advantage over data.table or data.frame.
As long as you're just aggregating and grouping, data.table and dplyr are also likely faster in terms of performance. I'd guess data.table has the edge over dplyr in terms of speed, but you would have to benchmark / profile that, e.g. using microbenchmark (see the sketch below). So if you're not working with a classic R time-series format anyway, there's no reason to switch to one just for aggregating.
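A minimal benchmarking sketch with microbenchmark, reusing df from the question (timings will of course vary with your machine and data size):
library(microbenchmark)
library(dplyr)
library(data.table)
dt <- data.table(df)
microbenchmark(
  dplyr = df %>%
    group_by(interval = ceiling(seq_along(ts) / 20)) %>%
    summarize(Var_mean = mean(Var)),
  data.table = dt[, list(Var_mean = mean(Var)),
                  by = list(interval = ceiling(seq_along(ts) / 20))]
)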

How do I plot data by splitting it into 5-second intervals?

I'm completely new to R, and I have been tasked with making a script to plot the protocols used by a simulated network of users as histograms, by a) identifying the protocols they use and b) splitting everything into 5-second intervals and generating a graph for each different protocol used.
Currently we have
data$bucket <- cut(as.numeric(format(data$DateTime, "%H%M")),
                   c(0, 600, 2000, 2359),
                   labels = c("00:00-06:00", "06:00-20:00", "20:00-23:59"))
which splits the data into the 3 time zones needed for another function.
What should the code be changed to for 5-second intervals?
Sorry if the question isn't very clear, and thank you.
The histogram function hist() can aggregate and/or plot all by itself, so you really don't need cut().
Let's create 1,000 random time stamps across one hour:
set.seed(1)
foo <- as.POSIXct("2014-12-17 00:00:00") + runif(1000) * 60 * 60
(Look at ?POSIXct on how R treats POSIX time objects. In particular, note that "+" assumes you want to add seconds, which is why I am multiplying by 60^2.)
Next, define the breakpoints in 5 second intervals:
breaks <- seq(as.POSIXct("2014-12-17 00:00:00"),
              as.POSIXct("2014-12-17 01:00:00"), by = "5 sec")
(This time, look at ?seq.POSIXt.)
Now we can plot the histogram. Note how we assign the output of hist() to an object bar:
bar <- hist(foo, breaks)
(If you don't want the plot, but only the bucket counts, use plot=FALSE.)
?hist tells you that hist() (invisibly) returns the counts per bucket. We can look at this by accessing the counts element of bar:
bar$counts
[1] 1 2 0 1 0 1 1 2 3 3 0 ...
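As for part b) of the question (one graph per protocol), here is a sketch under the assumption that the data frame has a Protocol column next to DateTime (hypothetical names, since the question doesn't show the data):
# One 5-second histogram per protocol, reusing the breaks defined above.
for (p in unique(data$Protocol)) {
  hist(data$DateTime[data$Protocol == p], breaks = breaks, main = p)
}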

How do you plot a histogram of the terms that occur n or more times?

I have a list of words coming straight from a file, one per line, that I import with read.csv, which produces a data.frame. What I need to do is compute and plot the number of occurrences of each of these words. That I can do easily, but the problem is that I have several hundred words, most of which occur just once or twice in the list, so I'm not interested in them.
EDIT: here is a sample wordlist that you can use to try: https://gist.github.com/anonymous/404a321840936bf15dd2#file-wordlist-csv It isn't the same one I used; I can't share that, as it's actual data from actual experiments and I'm not allowed to share it. For all intents and purposes, this list is comparable.
A "simple"
df <- data.frame(table(words$word))
df[df$Freq > 2, ]
does the trick. I now have a list of the words that occur more than twice, as well as a headache as to why I have to go from a data.frame to a table and back to a data.frame just to do that, let alone the fact that I have to repeat the name of the data.frame in the actual selection string. Beats me completely.
The problem is that now the filtered data.frame is useless for charting. Suppose this is what I get after filtering
         Var1 Freq
6      aspect    3
24     colour    7
41     differ   18
55     featur    7
58   function   19
81       look    4
82       make    3
85       mean    7
95    opposit   14
108  properti    3
109    purpos    6
112     relat    3
116    rhythm    4
118     shape    6
120   similar    5
123     sound    3
obviously if I just do a
plot(df[df$Freq > 2, ])
I get a plot which obviously (obviously?) has all the original terms on the x axis, while the y axis only shows the filtered values. So the next logical step is to try and force R's hand:
plot(x=df[df$Freq > 2, ]$Var1, y=df[df$Freq > 2, ]$Freq)
But clearly R knows best and already did that, because I get the exact same result. Using ggplot2, things get a little better:
qplot(x=df[df$Freq > 2, ]$Var1, y=df[df$Freq > 2, ]$Freq)
(yay for consistency) but I'd like it to show an actual histogram, y'know, with bars, like the ones they teach in sixth grade, so if I ask for that
qplot(x=df[df$Freq > 2, ]$Var1, y=df[df$Freq > 2, ]$Freq) + geom_bar()
I get
Error : Mapping a variable to y and also using stat="bin".
With stat="bin", it will attempt to set the y value to the count of cases in each group.
This can result in unexpected behavior and will not be allowed in a future version of ggplot2.
If you want y to represent counts of cases, use stat="bin" and don't map a variable to y.
If you want y to represent values in the data, use stat="identity".
See ?geom_bar for examples. (Defunct; last used in version 0.9.2)
so let us try the last suggestion, shall we?
qplot(df[df$Freq > 2, ]$Var1, stat='identity') + geom_bar()
fair enough, but where are my bars? So, back to basics:
qplot(words$word) + geom_bar() # even if geom_bar() is probably unnecessary this time
gives me a bar chart of all the terms again, including the ones I filtered out.
Am I crazy or [substitute a long list of ramblings and complaints about R]?
First, I generate some random data:
set.seed(1)
df <- data.frame(Var1 = letters, Freq = sample(1:8, 26, TRUE))
Then I use dplyr::filter because it is very fast and easy.
library(ggplot2); library(dplyr)
qplot(data = filter(df, Freq > 2), Var1, Freq, geom= "bar", stat = "identity")
First of all, at least with plot(), there's no reason to force a data.frame. plot() understands table objects. You can do
plot(table(words$words))
# or
plot(table(words$words), type="p")
# or
barplot(table(words$words))
We can use Filter to filter rows; unfortunately, that drops the table class, but we can add it back on with as.table. This looks like
plot(as.table(Filter(function(x) x>2, table(words$words))), type="p")
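A side note on the original x-axis puzzle: subsetting a data.frame keeps the unused factor levels around, which is why every term still shows up on the axis. A minimal sketch with droplevels(), assuming the df from the question:
df2 <- droplevels(df[df$Freq > 2, ])  # discard the now-unused levels of Var1
plot(df2)                             # only the filtered terms remain on the axis
# or, with ggplot2:
# ggplot(df2, aes(Var1, Freq)) + geom_bar(stat = "identity")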

R biglm with categorical variables

I have a large data set I'm working with in R using some of the big.___() packages. It's ~10 GB (~100 million rows x 15 columns) and looks like this:
Price  Var1  Var2
12.45     1     1
33.67     1     2
25.99     3     3
14.89     2     2
23.99     1     1
  ...   ...   ...
I am trying to predict price based on Var1 and Var2.
The problem I've come up with is that Var1 and Var2 are categorical / factor variables.
Var1 and Var2 each have 3 levels (1, 2 and 3), but only 6 of the 9 possible combinations appear in the data set:
(1,1; 1,2; 1,3; 2,2; 2,3; 3,3)
To use factor variables in biglm() they must be present in each chunk of data that biglm uses (my understanding is that biglm breaks the data set into a number of chunks and updates the regression parameters after analyzing each chunk, in order to get around dealing with data sets that are larger than RAM).
I've tried to subset the data but my computer can't handle it or my code is wrong:
bm11 <- big.matrix(150000000, 3)
bm11 <- subset(x, x[,2] == 1 & x[,3] == 1)
The above gives me a bunch of these:
Error: cannot allocate vector of size 1.1 Gb
Does anyone have any suggestions for working around this issue?
I'm using R 64-bit on a windows 7 machine w/ 4 gigs of RAM.
You do not need all the data or all values present in each chunk; you just need all the levels accounted for. This means that you can have a chunk like this:
curchunk <- data.frame(Price = c(12.45, 33.67),
                       Var1 = factor(c(1, 1), levels = 1:3),
                       Var2 = factor(1:2, levels = 1:3))
and it will work. Even though there is only one distinct value in Var1 and two values in Var2, all three levels are declared in both, so it will do the correct thing.
Also, biglm does not break the data into chunks for you; it expects you to give it manageable chunks to work with. Work through the examples to see this better. A common methodology with biglm is to read from a file or database, read in the first 'n' rows (where 'n' is a reasonable subset), and pass them to biglm (possibly after making sure all the factors have all the levels specified), then remove that chunk of data from memory and read in the next 'n' rows and pass that to update, continuing like this until the end of the file, removing the used chunks each time (so you have enough memory room for the next one). A sketch of that loop follows.
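Here is that sketch, assuming a hypothetical prices.csv with columns Price, Var1 and Var2; biglm() fits the first chunk and update() folds in each subsequent one:
library(biglm)

chunk_size <- 1e6                          # hypothetical chunk size
con <- file("prices.csv", open = "r")      # hypothetical file name
header <- strsplit(readLines(con, n = 1), ",")[[1]]

fit <- NULL
repeat {
  chunk <- tryCatch(
    read.csv(con, header = FALSE, col.names = header, nrows = chunk_size),
    error = function(e) NULL)              # no rows left to read
  if (is.null(chunk) || nrow(chunk) == 0) break
  # Declare all three levels in every chunk, as explained above.
  chunk$Var1 <- factor(chunk$Var1, levels = 1:3)
  chunk$Var2 <- factor(chunk$Var2, levels = 1:3)
  if (is.null(fit)) {
    fit <- biglm(Price ~ Var1 + Var2, data = chunk)
  } else {
    fit <- update(fit, chunk)
  }
}
close(con)
summary(fit)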

R: More than 52 levels in a predicting factor, truncated for printout

Hi, I'm a beginner in the R programming language. I wrote code for a regression tree using the rpart package. In my data, some of the independent variables have more than 100 levels. After running the rpart function,
I get the following warning: "More than 52 levels in a predicting factor, truncated for printout", and my tree displays in a very weird way. For example, my tree splits by location, which has around 70 distinct levels, but the label displayed in the tree shows "ZZZZZZZZZZZZZZZZ..........." even though I don't have any location called "ZZZZZZZZ".
Please help me.
Thanks in advance.
Many of the functions in R have limits on the number of levels a factor-type variable can have (e.g. randomForest limits the number of levels of a factor to 32).
One way that I've seen this dealt with, especially in data mining competitions, is to:
1) Determine the maximum number of levels allowed for a given function (call this X).
2) Use table() to determine the number of occurrences of each level of the factor and rank them from greatest to least.
3) Leave the top X - 1 levels of the factor as they are.
4) Change all the remaining levels to a single catch-all level to identify them as low-occurrence levels.
Here's an example that's a bit long but hopefully helps:
# Generate 1000 random numbers between 0 and 100.
vars1 <- data.frame(values1 = round(runif(1000) * 100, 0))
# Changes values to factor variable.
vars1$values1 <- factor(vars1$values1)
# Show top 6 rows of data frame.
head(vars1)
# Show the number of unique factor levels.
length(unique(vars1$values1))
# Create a table showing the frequency of each level's occurrence.
table1 <- data.frame(table(vars1))
# Orders the table in descending order of frequency.
table1 <- table1[order(-table1$Freq),]
head(table1)
# Assuming we want to use CART (rpart), we choose the top 51
# levels to leave unchanged.
# Get the values of the top 51 occurring levels.
noChange <- table1$vars1[1:51]
# We use '-1000' as the catch-all level to avoid overlap with the real
# levels (e.g. if '52' were actually one of the levels).
# ifelse() checks whether each value is among the top 51 levels. If so it
# is kept as is; if not it is changed to '-1000'. Note as.character() is
# needed: ifelse() on a bare factor returns the underlying integer codes.
vars1$newFactor <- ifelse(vars1$values1 %in% noChange,
                          as.character(vars1$values1), "-1000")
# Show the number of levels of the new factor column.
length(unique(vars1$newFactor))
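As an aside (not part of the original answer), the forcats package now offers the same lumping as a one-liner, keeping the n most frequent levels and pooling the rest:
library(forcats)
# Keep the 51 most frequent levels, pool everything else into '-1000'
# (ties can occasionally keep a few extra levels).
vars1$newFactor2 <- fct_lump_n(vars1$values1, n = 51, other_level = "-1000")
length(unique(vars1$newFactor2))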
Finally, you may want to consider truncating long variable names and labels when using rpart, as the tree display gets very busy when there are a large number of variables or they have long names.
