Converting Data for Timeseries analysis using ets() - r

I have a csv file with the following data:
Year 1000 Barrels/Day
1/15/2000 239
2/15/2000 267
3/15/2000 162
4/15/2000 264
5/15/2000 170
6/15/2000 210
7/15/2000 264
8/15/2000 405
9/15/2000 352
10/15/2000 337
I ran the following code to convert it to a time-series format for processing.
library(xts)
library(forecast)
df<- read.csv("US-OIL.csv")
stocks <- xts(df[,-1], order.by=as.Date(df[,1], "%m/%d/%Y"))
ets(stocks)
But when I run the last line, I get an ETS(A,N,N) model as output.
I am not sure why this happens, because when I run ets() on the preloaded elecequip dataset from the fpp package, I get an ETS(M,Ad,M) model.
I am not sure what causes this discrepancy; any comments would be appreciated.

You are letting ets() automatically choose a model based on AIC, AICc, or BIC. The elecequip data are different, so the selected model is different as well.
See slide 24:
http://robjhyndman.com/talks/RevolutionR/6-ETS.pdf
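One additional point worth checking (an assumption, since the full series isn't shown): an xts object carries no seasonal frequency, so ets() will only consider non-seasonal models such as ETS(A,N,N). Converting the monthly data to a ts with frequency = 12 lets ets() entertain seasonal models as well:

```r
library(forecast)

df <- read.csv("US-OIL.csv")                 # file from the question
# start = c(2000, 1) assumes the series begins January 2000, as shown
stocks_ts <- ts(df[, 2], start = c(2000, 1), frequency = 12)
fit <- ets(stocks_ts)                        # can now select a seasonal ETS model
summary(fit)
```

Whether a seasonal model is actually chosen still depends on the data; the point is only that ets() cannot consider one unless the frequency is set.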

Related

What is the appropriate frequency and start/end date in R for ts()?

I have the following dataset which I want to make a time series object of for auto.arima forecasting:
head(df)
total_score return
1539 121.77
1074 422.18
901 -229.79
843 96.30
1101 -55.25
961 -48.28
This data set consists of 13104 rows, each row representing the sentiment score of tweets and the BTC return on an hourly basis; the first row is 2021-01-01 00:00, the second row is 2021-01-01 01:00, and so on up to 2022-06-30 23:00. I have looked up how many hours fit in this range, and that is 13103. How can I build my ts object so that I can use it for forecasting with auto.arima in R?
Moreover, I understand that auto.arima assumes homoscedastic errors, whereas I need it to work with heteroscedastic errors. I have also read that I might use a GARCH model for this. However, if my auto.arima call results in an order of (2,0,0), does this mean that my GARCH model should be a (0,0,2)?
PS: I am still confused about why my data seem to be stationary; I was under the impression that cryptocurrencies, including their returns, are most likely NOT stationary. But that is something for another time.
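On the ts construction itself, here is a hedged sketch (assuming df and its return column are as shown above, with the series starting 2021-01-01 00:00): for hourly data the daily cycle can be encoded with frequency = 24, and msts() from the forecast package can additionally hold a weekly period if one is suspected:

```r
library(forecast)

# Hourly series with a daily cycle; the start index is arbitrary here
y <- ts(df$return, frequency = 24)
fit <- auto.arima(y)   # searches ARIMA orders, including seasonal terms
summary(fit)
```

This is only one reasonable setup, not the definitive one; with strong weekly patterns, msts(df$return, seasonal.periods = c(24, 168)) may be worth trying instead.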

When plotting values over a large period of time, the dates populate out of order

I am trying to plot many years of data to view the trends in my data. When using plot(dx) I get a date vs. value plot, but the dates jump around and don't seem to be in order. Below is the code I am using, along with a copy of the plot it produces. Any help would be appreciated.
oz8<-(SRDAILYAVG_1004$Ozone.8.hr.Max.Value)
dd<-(SRDAILYAVG_1004$Date)
MDA8<-exp(oz8)
dx<-data.frame(date=dd,value=MDA8)
plot(dx)
A sample of the data stored in dx is provided as well.
241 8/29/2001 NaN
242 8/30/2001 NaN
243 8/31/2001 1.019182
244 9/1/2001 1.031486
245 9/2/2001 1.030455
246 9/3/2001 1.025315
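A likely cause (an assumption, since the structure of SRDAILYAVG_1004 isn't shown) is that the Date column was read as character or factor, so plot() orders the x-axis lexically rather than chronologically. Converting to Date class and sorting restores the order:

```r
# Dates in the sample are m/d/Y strings; convert before plotting
dx$date <- as.Date(dx$date, format = "%m/%d/%Y")
dx <- dx[order(dx$date), ]           # put rows in chronological order
plot(dx$date, dx$value, type = "l", xlab = "Date", ylab = "MDA8")
```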

How to Interpret "Levels" in Random Forest using R/Rattle

I am brand new to R/Rattle and am having difficulty understanding how to interpret the last line of this code output. Here is the function call along with its output:
> head(weatherRF$model$predicted, 10)
336 342 94 304 227 173 265 44 230 245
No No No No No No No No No No
Levels: No Yes
This code is implementing a weather data set in which we are trying to get predictions for "RainTomorrow". I understand that this function calls for the predictions for the first 10 observations of the data set. What I do NOT understand is what the last line ("Levels: No Yes") means in the output.
It's called a factor variable.
That is the list of permitted values of the factor, here the values No and Yes are permitted.
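A minimal toy example of the same output (the variable name rain is made up for illustration):

```r
rain <- factor(c("No", "No", "Yes"), levels = c("No", "Yes"))
rain
# [1] No  No  Yes
# Levels: No Yes
levels(rain)   # the permitted values, whether or not all of them appear
```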

Rolling subset of data frame within for loop in R

Big picture explanation: I am trying to do a sliding-window analysis on environmental data in R. I have PAR (photosynthetically active radiation) data for a select number of sequential dates (pre-determined based on other biological factors) for two years (2014 and 2015), with one value of PAR per day. See below the first few lines of the data frame (the data frame name is "rollingpar").
par14 par15
1356.3242 1306.7725
NaN 1232.5637
1349.3519 505.4832
NaN 1350.4282
1344.9306 1344.6508
NaN 1277.9051
989.5620 NaN
I would like to create a loop (or any other way possible) to subset the data frame (both columns!) into two week windows (14 rows) from start to finish sliding from one window to the next by a week (7 rows). So the first window would include rows 1 to 14 and the second window would include rows 8 to 21 and so forth. After subsetting, the data needs to be flipped in structure (currently using the melt function in the reshape2 package) so that the values of the PAR data are in one column and the variable of par14 or par15 is in the other column. Then I need to get rid of the NaN data and finally perform a wilcox rank sum test on each window comparing PAR by the variable year (par14 or par15). Below is the code I wrote to prove the concept of what I wanted and for the first subsetted window it gives me exactly what I want.
library(reshape2)
par.sub=rollingpar[1:14, ]
par.sub=melt(par.sub)
par.sub=na.omit(par.sub)
par.sub$variable=as.factor(par.sub$variable)
wilcox.test(value~variable, par.sub)
#when melt flips a data frame the columns become value and variable...
#for this case value holds the PAR data and variable holds the year
#information
When I tried to write a for loop to iterate the process through the whole data frame (total rows = 139) I got errors every which way I ran it. Additionally, this loop doesn't even take into account the sliding by one week aspect. I figured if I could just figure out how to get windows and run analysis via a loop first then I could try to parse through the sliding part. Basically I realize that what I explained I wanted and what I wrote this for loop to do are slightly different. The code below is sliding row by row or on a one day basis. I would greatly appreciate if the solution encompassed the sliding by a week aspect. I am fairly new to R and do not have extensive experience with for loops so I feel like there is probably an easy fix to make this work.
wilcoxvalues=data.frame(p.values=numeric(0))
Upar=rollingpar$par14
for (i in 1:length(Upar)){
par.sub=rollingpar[[i]:[i]+13, ]
par.sub=melt(par.sub)
par.sub=na.omit(par.sub)
par.sub$variable=as.factor(par.sub$variable)
save.sub=wilcox.test(value~variable, par.sub)
for (j in 1:length(save.sub)){
wilcoxvalues$p.value[j]=save.sub$p.value
}
}
If anyone has a much better way to do this through a different package or function that I am unaware of I would love to be enlightened. I did try roll apply but ran into problems with finding a way to apply it to an entire data frame and not just one column. I have searched for assistance from the many other questions regarding subsetting, for loops, and rolling analysis, but can't quite seem to find exactly what I need. Any help would be appreciated to a frustrated grad student :) and if I did not provide enough information please let me know.
Consider an lapply over a sequence of window start rows, stepping by 7 and stopping 13 rows before the end of the data frame so that every window holds a full 14 days. Each call returns a one-row data frame of the Wilcox test p-value with a week indicator; the list items are then row-bound into a single, final data frame:
library(reshape2)
slidingWindow <- seq(1, nrow(rollingpar) - 13, by=7)
slidingWindow
# [1]   1   8  15  22  29  36  43  50  57  64  71  78  85  92  99 106 113 120
# LIST OF WILCOX P VALUES DFs FOR EACH SLIDING WINDOW (TWO-WEEK PERIODS)
wilcoxvalues <- lapply(slidingWindow, function(i) {
par.sub=rollingpar[i:(i+13), ]
par.sub=melt(par.sub)
par.sub=na.omit(par.sub)
par.sub$variable=as.factor(par.sub$variable)
data.frame(week=paste0("Week: ", i%/%7+1, "-", i%/%7+2),
p.values=wilcox.test(value~variable, par.sub)$p.value)
})
# SINGLE DF OF ALL P-VALUES
wilcoxdf <- do.call(rbind, wilcoxvalues)
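Since the question mentions trying rollapply on a single column: one workaround (a sketch under the same assumptions, with rollingpar as in the question) is to roll over the row indices instead, so that each window can subset the whole data frame:

```r
library(zoo)
library(reshape2)

# Slide over row indices (not a single column), so both par14 and par15
# stay available inside each 14-row window; by = 7 shifts a week at a time
pvals <- rollapply(seq_len(nrow(rollingpar)), width = 14, by = 7,
                   align = "left", FUN = function(idx) {
  par.sub <- na.omit(melt(rollingpar[idx, ]))
  wilcox.test(value ~ variable, data = par.sub)$p.value
})
```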

Object not found error

Firstly I need to explain that I've had MINIMAL training on R and have 0 knowledge of coding languages or programmes like R so please excuse me if I ask silly questions or don't understand something basic.
Also, I have tried to look at past topics/answers on this but I'm having a hard time relating the answers to my data so I apologise if this question has already been answered.
Basically I have a data set and I'm trying to find the mean of two variables (Peak flow before a walk in the cold, and peak flow after a walk in the cold) in this set. This is the entire code I've used so far:
drugs <- read.table(file = "C:\\Users\\Becky\\My Documents\\Asthmadata.txt", header = TRUE)
drugs
str(drugs)
mean.Asthmadata <- tapply (Asthmadata$trial1, list(Asthmadata$PEFR1), mean)
mean.Asthmadata
It works fine until the mean.Asthmadata line. The data come up in R just fine with the other lines, but when I get to the mean and run the mean.Asthmadata [...] code, I keep getting the same error: "object 'mean.Asthmadata' not found"
My friend used the same code I did and it worked for him so I'm confused. Am I doing something wrong?
Thanks
EDIT:
#BenBolker
This is my data set
trial1 PEFR1 trial2 PEFR2
Before 310 After 299
Before 242 After 201
Before 340 After 232
Before 388 After 312
Before 294 After 221
Before 251 After 256
Before 391 After 327
Before 401 After 331
Before 287 After 231
And here's all the code I've used:
drugs <- read.table(file = "C:\\Users\\Becky\\My Documents\\Asthmadata.txt", header = TRUE)
drugs
str(drugs)
mean.drugs <- tapply (drugs$trial1, list(drugs$PEFR1), mean)
mean.drugs
The R version I have has two versions: i386 3.1.3, and x64 3.1.3 – I've tried both but neither seem to do what I want. I'm also using Windows 7 Home Premium 64bit. Hope I've included everything you need and I apologise if my formatting is off – I can't quite figure out how to format properly on here yet.
And the error I'm getting NOW is: “Error in split.default(X, group) : first argument must be a vector” when running the code Roland kindly provided. So I'm getting a different error each time I try it – it must be something I'm doing wrong.
Hope I've formatted that all correctly and included everything you need. Thanks :)
drugs <- read.table(header=TRUE,text="
trial1 PEFR1 trial2 PEFR2
Before 310 After 299
Before 242 After 201
Before 340 After 232
Before 388 After 312
Before 294 After 221
Before 251 After 256
Before 391 After 327
Before 401 After 331
Before 287 After 231")
In the current format you can calculate the mean before and after just by doing
mean(drugs$PEFR1)
and
mean(drugs$PEFR2)
What you may have had in mind was this shape:
drugs2 <- with(drugs,
data.frame(trial=c(as.character(trial1),
as.character(trial2)),
PEFR=c(PEFR1,PEFR2)))
(I used with() for convenience -- it's a way to temporarily attach a data frame so you can refer directly to the variables in it.)
There's a bit of a pitfall in combining trial1 and trial2, as they get coerced to their numeric codes, which are all 1s in both cases, unless you use as.character() ...
You had the order of the variable to aggregate and the variable to group by backwards (you want to aggregate PEFR by trial, not the other way around):
mean.drugs <- with(drugs2,
tapply (PEFR, list(trial), mean))
## After Before
## 267.7778 322.6667
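For reference, aggregate() with formula notation gives the same means from the reshaped drugs2 data frame built above:

```r
aggregate(PEFR ~ trial, data = drugs2, FUN = mean)
#    trial     PEFR
# 1  After 267.7778
# 2 Before 322.6667
```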
