Regression - out-of-sample forecasting - r

I am trying to figure out how to deal with my forecasting problem, and I am not sure whether my understanding of this field is right, so it would be really nice if someone could help me. First of all, my goal is to forecast a time series with regression. Instead of using an ARIMA model or other heuristic models, I want to focus on machine learning techniques such as random forest regression, k-nearest-neighbour regression, etc. Here is an overview of the dataset:
Timestamp UsageCPU UsageMemory Indicator Delay
2014-01-03 21:50:00 3123 1231 1 123
2014-01-03 22:00:00 5123 2355 1 322
2014-01-03 22:10:00 3121 1233 2 321
2014-01-03 22:20:00 2111 1234 2 211
2014-01-03 22:30:00 1000 2222 2 0
2014-01-03 22:40:00 4754 1599 1 0
The timestamp increases in steps of 10 minutes, and I want to predict the dependent variable UsageCPU from the independent variables UsageMemory, Indicator, etc. At this point I will explain my general understanding of the prediction part. For the prediction it is necessary to separate the dataset into training, validation and test sets. My dataset, which covers 2 whole weeks, is split into 60% training, 20% validation and 20% test. This means the training set contains the first 8 days, and the validation and test sets contain 3 days each. After that I can train a model in SparkR (the exact settings are not important).
model <- spark.randomForest(train, UsageCPU ~ UsageMemory + Indicator + Delay,
                            type = "regression", maxDepth = 30, maxBins = 50, numTrees = 50,
                            impurity = "variance", featureSubsetStrategy = "all")
After this I can validate the results on the validation set and compute the RMSE to see the accuracy of the model and which points have to be tuned in the model-building part. Once that is finished I can predict on the test dataset:
predictions <- predict(model, test)
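For reference, a hedged sketch of the RMSE computation described above, assuming a SparkDataFrame called validation that holds the true UsageCPU values (the name is an assumption, not from the original post):
val_pred  <- predict(model, validation)                            # adds a "prediction" column
val_local <- collect(select(val_pred, "UsageCPU", "prediction"))   # pull the two columns to local R
rmse <- sqrt(mean((val_local$UsageCPU - val_local$prediction)^2))
rmse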
So the prediction works fine, but this is only an in-sample forecast and cannot be used to predict, for example, the next day. In my understanding, an in-sample forecast can only predict data within the dataset, not future values that will happen tomorrow. I really want to predict, for example, the next day, or just the next 10 minutes / 1 hour, which is only possible with out-of-sample forecasting. I also tried something like the following (rolling regression) on the predicted values from the random forest, but in my case the rolling regression is only useful for evaluating the performance of different regressors with respect to different parameter combinations. So in my understanding this is not out-of-sample forecasting.
t <- bind(prediction, RollingRegression3 = rollApply(prediction, fun=function(x) mean(UsageCPU), window=6, align='right'))
So in my understanding I need something (maybe lag values?) before the model-building process starts. I have also read a lot of different papers and books, but there is no clear explanation of how to do it and what the key points are. There is only something like t+1, t+n, but right now I do not even know how to do that. It would be really nice if someone could help me, because I have been trying to figure this out for three months now. Thank you.

Let's see if I get your problem right. I suppose that, given a time window, e.g. the last 144 observations (one day) of UsageCPU, UsageMemory, Indicator and Delay, you want to forecast the next n observations of UsageCPU. One way you could do such a thing, using random forests, is to assign one model to each next observation you want to forecast. So, if you want to forecast the next 10 UsageCPU observations, you should train 10 random forest models.

Using the example I began with, you could split the data you have into chunks of 154 observations. In each chunk, you use the first 144 observations to forecast the last 10 values of UsageCPU. There are lots of ways to use feature engineering to extract information from these first 144 observations to train your model on, e.g. the mean of each variable within the window, the last observation of each variable, the global mean of each variable. So, for each chunk you get a vector containing a bunch of predictors and 10 target values.

Bind the vectors you got for each chunk and you'll have a matrix where the first columns are the predictors and the last 10 columns are the targets. Train each random forest on the predictor columns and one of the target columns. Now you can apply the models to the features you extract from any data chunk containing 144 observations. The model trained on target column 1 will 'forecast' one observation ahead, the model trained on target column 2 two observations ahead, the model trained on target column 3 three observations ahead, and so on.
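A minimal sketch of this chunking idea, assuming a data frame df with the columns from the question (UsageCPU, UsageMemory) and using the randomForest package; the feature names and window/horizon sizes are illustrative, not prescribed by the original answer:
library(randomForest)

window  <- 144   # observations used as history (one day)
horizon <- 10    # observations to forecast ahead
n <- nrow(df)

# Build one training example per chunk: simple window features plus 10 future targets
starts <- seq(1, n - window - horizon + 1, by = horizon)
examples <- lapply(starts, function(s) {
  past <- df[s:(s + window - 1), ]
  fut  <- df$UsageCPU[(s + window):(s + window + horizon - 1)]
  c(meanCPU = mean(past$UsageCPU),
    lastCPU = tail(past$UsageCPU, 1),
    meanMem = mean(past$UsageMemory),
    lastMem = tail(past$UsageMemory, 1),
    setNames(fut, paste0("target", 1:horizon)))
})
mat <- as.data.frame(do.call(rbind, examples))

# One random forest per forecast horizon
models <- lapply(1:horizon, function(h) {
  randomForest(x = mat[, c("meanCPU", "lastCPU", "meanMem", "lastMem")],
               y = mat[[paste0("target", h)]])
})

# Out-of-sample forecast: features from the latest 144 observations
latest <- df[(n - window + 1):n, ]
newx <- data.frame(meanCPU = mean(latest$UsageCPU),
                   lastCPU = tail(latest$UsageCPU, 1),
                   meanMem = mean(latest$UsageMemory),
                   lastMem = tail(latest$UsageMemory, 1))
forecast_next10 <- sapply(models, predict, newdata = newx)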

Related

How to create and analyze a time series with variable test frequency in R

Here is a short description of the problem I am trying to solve: I have test data for multiple variables (weight, thickness, absorption, etc.) that are taken at varying intervals over time - no set schedule, sometimes a test a day, sometimes days might go by between tests. I want to detect trends in each of these and alert stakeholders when any parameter is trending up/down more than a certain amount. I first fit a linear model between each variable's raw data and test time (I converted the test time to days or weeks since a fixed date) and created a table with slopes for each variable - so the stakeholders can view one table for all variables and quickly see if any of them is raising concern. The issue was that the data for most variables is very noisy. Someone suggested using time series functions, separating noise and seasonality from the trends, and studying the trend component for a cleaner analysis. I started to look into this and already see a couple of concerns/questions:
Time series analysis seems to require specifying a frequency - how do you handle this if your test data is not taken at regular intervals?
If one gets over the issue in #1 above, decomposes the data, and gets the trend separated out (i.e. takes out the random variation/noise in particular), how would you then get a slope metric from that? Namely, if I wanted to fit a linear model to the trend component of the raw data (after decomposing), what would be the x (independent) variable? Is there a way to connect the trend component of the ts-decompose function with the original data's x-axis data (in this case the actual test dates/times, say converted to weeks or days from a fixed date)?
Finally, is there a better way of accomplishing what I explained above? I am only looking for general trends over time - say over 3 months of data, not day to day trends.
Time series models are generally used to see whether previous observations of a variable have an influence on future observations. You model under the assumption that the previous observations are able to predict the future observations. That is the reason why most (not all) time series models require evenly spaced instances of training data. If your data is not only very noisy, but also not collected on a regular basis, then you should seriously consider whether a time series model is the appropriate modelling choice.
Time series analysis seems to require specifying a frequency - how do you handle this if your test data is not taken at regular intervals.
What you can do is create an aggregate by increasing the time bucket (shifting from daily data to a weekly average, for instance) so that every unit of time has an instance of training data. Following your final comment, you could also work with the average of the observations over the last 3 months of data instead of the raw observations.
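For illustration, a minimal base-R sketch of such an aggregation, assuming a hypothetical data frame tests with a Date column test_date and a numeric column weight (names invented here):
tests$week <- cut(tests$test_date, breaks = "week")            # label each test with its calendar week
weekly <- aggregate(weight ~ week, data = tests, FUN = mean)   # one averaged training instance per week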
If one gets over the issue in #1 above, decomposes the data, and gets the trend separated out (ie. take out particularly the random variation/noise), how would you then get a slope metric from that? Namely, if I wanted to then fit a linear model to the trend component of the raw data (after decomposing), what would be the x (independent) variable?
In the simplest case of a linear model, the independent variable is the unit of time corresponding to the prediction you are trying to make. However, this is not always regarded as a time series model.
In the case of an autoregressive model, the predictor would be the previous observation of what you are trying to predict, i.e. y(t) depends on y(t-1), for instance multiplied by a coefficient. I encourage you to read Forecasting: Principles and Practice, which is an excellent book on the matter.
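As a small illustration of those two options, here is a sketch on simulated data (nothing below comes from the question's dataset):
set.seed(1)
t <- 1:100
y <- 0.2 * t + rnorm(100, sd = 2)        # noisy series with an upward trend

# (a) simplest linear model: the time index is the independent variable
trend_fit <- lm(y ~ t)
coef(trend_fit)["t"]                     # the slope metric you could report

# (b) autoregressive model: the previous observation predicts the next one
ar_fit <- arima(y, order = c(1, 0, 0))   # y(t) modelled via y(t-1)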
Is there a way to connect the trend component of the ts-decompose function with the original data's x-axis data (in this case the actual test date/times, say converted to weeks or days from a fixed date)?
The decompose function, applied to a ts object, returns a list which includes trend. Trend is a vector of the estimated trend components, each corresponding to its respective time value.
Let's create an example time series with a linear trend:
df <- data.frame(
  date = seq(from = as.Date("2021-01-01"), to = as.Date("2021-01-10"), by = 1)
)
df$value <- jitter(seq(from = 1, to = nrow(df), by = 1))
time_series <- ts(df$value, frequency = 5)
df$trend <- decompose(time_series)$trend
> df
date value trend
1 2021-01-01 0.9170296 NA
2 2021-01-02 1.8899565 NA
3 2021-01-03 3.0816892 2.992256
4 2021-01-04 4.0075589 4.042486
5 2021-01-05 5.0650478 5.046874
6 2021-01-06 6.1681775 6.051641
7 2021-01-07 6.9118942 7.074260
8 2021-01-08 8.1055282 8.041628
9 2021-01-09 9.1206522 NA
10 2021-01-10 9.9018900 NA
As you can see, the trend component is already an estimate of the dependent variable at the corresponding time. In decompose, the trend estimate is based on a moving average.

Should I use Friedman test or Mixed Model for my data in R? Nested or not?

My response variable is the Proportion of Range Exposed to extreme events for terrestrial mammal species in the future. More precisely, it is the Difference of Proportion of Range Exposed (DPRE) from the historical period to future greenhouse gas emission scenarios (it is a measure of the increase/decrease in the percentage of range exposed): this means that my response variable goes from -1 to 1 (where +1 implies that the range will experience a +100% increase in the proportion of exposure: from 0% in the historical period to 100% in the future scenario).
As said, I am analyzing these differences for all terrestrial mammals (5311 species), across different scenarios and for two time periods: near future (means of 2021-2040) and far future (means of 2081-2100).
So, my explanatory variables are:
3 scenarios of greenhouse gas emissions (Representative Concentration Pathways: RCP2.6, RCP4.5 and RCP8.5);
Time Periods (Near Future and Far Future): NF and FF;
Species: 5311 individuals.
I am not an expert in statistics, so I'm not sure which of the two suggestions I received to follow:
Friedman test with Species as blocks (but in which I should somehow set up a nested model, with RCPs as groups nested within Time Periods; or a sort of two-way Friedman, with RCP and Time Period as the two different factors).
Linear Mixed Models with RCP*TimePeriod as fixed effects, and (TimePeriod | Species ) as random effects.
I ran t-tests, and the distributions turned out not to be normal, which is why I was advised to use Friedman instead of ANOVA; I ran pairwise Wilcoxon rank-sum tests and in this case found significant differences between NF and FF for all RCPs.
I have to say I ran 3 Wilcoxon tests, one for every RCP, so maybe a third option would be to create 3 different models, one for every RCP, but this would also move away from the standard "repeated measures" analysis of the Friedman test.
One last consideration: I have to run another model, where the response variable is the Difference of Proportion of Subrange Exposed. In this case the other explanatory variables are maintained, but the analysis is not global: it takes into consideration the differences that could be present across 14 IUCN biomes. So every analysis is made across RCPs, for NF and FF, and for all biomes. Should I create and run 14 (biomes) x 3 (RCPs) x 2 (time periods) = 84 models in this case? Or a sort of doubly nested (Time Periods and Biomes) model?
If necessary I can provide the large dataframe.
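For reference only, a hedged sketch of how the second suggestion would be written in lme4, assuming a long-format data frame dpre_df (a hypothetical name) with columns DPRE, RCP, TimePeriod and Species:
library(lme4)
# RCP*TimePeriod as fixed effects, TimePeriod varying by Species as a random slope
m_lmm <- lmer(DPRE ~ RCP * TimePeriod + (TimePeriod | Species), data = dpre_df)
summary(m_lmm)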

How to determine the correct mixed effects structure in a binomial GLMM (lme4)?

Could someone help me to determine the correct random variable structure in my binomial GLMM in lme4?
I will first try to explain my data as best as I can. I have binomial data of seedlings that were eaten (1) or not eaten (0), together with data on vegetation cover. I am trying to figure out whether there is a relationship between vegetation cover and the probability of a tree being eaten, as the other vegetation is a food source that could attract herbivores to a certain forest patch.
The data has been collected in ~90 plots scattered over a National Park for 9 years now. Some were measured in all years, some were measured in only a few years (destroyed/newly added plots). The original dataset is split in 2 (deciduous vs coniferous), both parts containing ~55,000 entries. Per plot about 100 saplings were measured every time, so the two separate datasets probably contain about 50 trees per plot (though this will not always be the case, since the decid:conif ratio is not always equal). Each plot consists of 4 subplots.
I am aware that there might be spatial autocorrelation due to plot placement, but we will not correct for this, yet.
Every year the vegetation is surveyed in the same period. Vegetation cover is estimated at plot-level, individual trees (binary) are measured at a subplot-level.
All trees are measured, so the amount of responses per subplot will differ between subplots and years, as the forest naturally regenerates.
Unfortunately, I cannot share my original data, but I tried to create an example that captures the essentials:
#set seed for whole procedure
addTaskCallback(function(...) {set.seed(453);TRUE})
# Generate vector containing individual vegetation covers (in %)
cover1vec <- c(sample(0:100,10, replace = TRUE)) # the second argument is the number of covers generated
# Create dataset
DT <- data.frame(
eaten = sample(c(0,1), 80, replace = TRUE),
plot = as.factor(rep(c(1:5), each = 16)),
subplot = as.factor(rep(c(1:4), each = 2)),
year = as.factor(rep(c(2012,2013), each = 8)),
cover1 = rep(cover1vec, each = 8)
)
Which will generate this dataset:
>DT
eaten plot subplot year cover1
1 0 1 1 2012 4
2 0 1 1 2012 4
3 1 1 2 2012 4
4 1 1 2 2012 4
5 0 1 3 2012 4
6 1 1 3 2012 4
7 0 1 4 2012 4
8 1 1 4 2012 4
9 1 1 1 2013 77
10 0 1 1 2013 77
11 0 1 2 2013 77
12 1 1 2 2013 77
13 1 1 3 2013 77
14 0 1 3 2013 77
15 1 1 4 2013 77
16 0 1 4 2013 77
17 0 2 1 2012 46
18 0 2 1 2012 46
19 0 2 2 2012 46
20 1 2 2 2012 46
....etc....
80 0 5 4 2013 82
Note 1: to clarify again, in this example the number of responses is the same for every subplot:year combination, making the data balanced, which is not the case in the original dataset.
Note 2: this example cannot actually be run in a GLMM, as I get a singularity warning and all my random effect variance estimates are zero. Apparently my example data is not appropriate (because using sample() made the 0s and 1s too evenly distributed for the effects to be large enough?).
As you can see from the example, cover data is the same for every plot:year combination.
Plots are measured multiple years (only 2012 and 2013 in the example), so there are repeated measures.
Additionally, a year effect is likely, given the fact that we have e.g. drier/wetter years.
First I thought about the following model structure:
library(lme4)
mod1 <- glmer(eaten ~ cover1 + (1 | year) + (1 | plot), data = DT, family = binomial)
summary(mod1)
Where (1 | year) should correct for differences between years and (1 | plot) should correct for the repeated measures.
But then I started thinking: all trees measured in plot 1, during year 2012 will be more similar to each other than when they are compared with (partially the same) trees from plot 1, during year 2013.
So, I doubt that this random model structure will correct for this within plot temporal effect.
So my best guess is to add another random variable, where this "interaction" is accounted for.
I know of two ways to possibly achieve this:
Method 1.
Adding the random variable " + (1 | year:plot)"
Method 2.
Adding the random variable " + (1 | year/plot)"
From what other people told me, I still do not know the difference between the two.
I saw that Method 2 added an extra random variable (year.1) compared to Method 1, but I do not know how to interpret that extra random variable.
As an example, I added the Random effects summary using Method 2 (zeros due to singularity issues with my example data):
Random effects:
Groups Name Variance Std.Dev.
plot.year (Intercept) 0 0
plot (Intercept) 0 0
year (Intercept) 0 0
year.1 (Intercept) 0 0
Number of obs: 80, groups: plot:year, 10; plot, 5; year, 2
Can someone explain to me the actual difference between Method 1 and Method 2?
I am trying to understand what is happening, but cannot grasp it.
I already tried to get advice from a colleague and he mentioned that it is likely more appropriate to use cbind(success, failure) per plot:year combination.
Via this site I found that cbind is used in binomial models when Ntrials > 1, which I think is indeed the case given our sampling procedure.
I wonder, if cbind is already used on a plot:year combination, whether I need to add a plot:year random variable?
When using cbind, the example data would look like this:
>DT3
plot year cover1 Eaten_suc Eaten_fail
8 1 2012 4 4 4
16 1 2013 77 4 4
24 2 2012 46 2 6
32 2 2013 26 6 2
40 3 2012 91 2 6
48 3 2013 40 3 5
56 4 2012 61 5 3
64 4 2013 19 2 6
72 5 2012 19 5 3
80 5 2013 82 2 6
What would be the correct random model structure and why?
I was thinking about:
Possibility A
mod4 <- glmer(cbind(Eaten_suc, Eaten_fail) ~ cover1 + (1 | year) + (1 | plot),
data = DT3, family = binomial)
Possibility B
mod5 <- glmer(cbind(Eaten_suc, Eaten_fail) ~ cover1 + (1 | year) + (1 | plot) + (1 | year:plot),
data = DT3, family = binomial)
But doesn't cbind(success, failure) already correct for the year:plot dependence?
Possibility C
mod6 <- glmer(cbind(Eaten_suc, Eaten_fail) ~ cover1 + (1 | year) + (1 | plot) + (1 | year/plot),
data = DT3, family = binomial)
As I do not yet understand the difference between year:plot and year/plot
Thus: Is it indeed more appropriate to use the cbind-method than the raw binary data? And what random model structure would be necessary to prevent pseudoreplication and other dependencies?
Thank you in advance for your time and input!
EDIT 7/12/20: I added some extra information about the original data
You are asking quite a few questions. I'll try to cover them all, but I do suggest reading the documentation and vignettes from lme4 and the glmmFAQ page for more information. I'd also highly recommend searching for these topics on Google Scholar, as they are fairly well covered.
I'll start somewhere simple
Note 2 (why is my model singular?)
Your model is singular because the way you are simulating your data does not build any dependency into the data itself. If you wanted to simulate data for a binomial model, you would first construct the linear predictor eta = X %*% beta and apply the inverse link to eta to obtain the probability of success. One can then use this probability to simulate the binary outcome. This is thus a 2-step process: first use some known X, or an X randomly simulated from a prior distribution of our choosing; in the second step use rbinom to simulate the binary outcome so that it stays dependent on the predictor X.
In your example you are simulating an independent X and a y whose success probability is independent of X as well. Thus, when we look at the outcome y, the probability of success is equal to p = c for every subgroup, for some constant c.
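To make the two-step idea concrete, here is a hedged sketch (variable names and coefficients are invented, not taken from the question):
set.seed(453)
n      <- 80
cover  <- runif(n, 0, 100)                  # known predictor X
plot_f <- factor(rep(1:5, each = n / 5))    # grouping factor
u      <- rnorm(5, sd = 0.8)                # random intercept per plot
eta    <- -1 + 0.03 * cover + u[plot_f]     # step 1: linear predictor eta = X %*% beta + Z u
p      <- plogis(eta)                       # inverse logit link gives the success probability
eaten  <- rbinom(n, size = 1, prob = p)     # step 2: binary outcome now depends on X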
Can someone explain me the actual difference between Method 1 and Method 2? ((1| year:plot) vs (1|year/plot))
This is explained in the package vignette fitting linear mixed effects models with lme4 in the table on page 7.
(1|year/plot) indicates that we have 2 random intercept effects, year and plot, with plot nested within year.
(1|year:plot) indicates a single random intercept effect, plot nested within year. That is, we do not include the main effect of year. It is somewhat similar to fitting a model without an intercept (although less drastic, and interpretation is not destroyed).
It is more common to see the first rather than the second, but we can write the first as a function of the second: (1|year/plot) is equivalent to (1|year) + (1|year:plot).
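As a quick check of that equivalence, a sketch using the simulated DT from the question (which will again be singular, but the point here is only the formula expansion):
m_nested   <- glmer(eaten ~ cover1 + (1 | year/plot), data = DT, family = binomial)
m_expanded <- glmer(eaten ~ cover1 + (1 | year) + (1 | year:plot), data = DT, family = binomial)
# Both fits contain random intercepts for year and for year:plot, so the
# estimated variance components and fixed effects should coincide.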
Thus: Is it indeed more appropriate to use the cbind-method than the raw binary data?
cbind in a formula is used for binomial (aggregated) data or multivariate analysis, while for binary data we use the raw 0/1 vector indicating success/failure, similar to how we would use glm. If you are uninterested in a random/fixed effect of subplot, you could aggregate your data within each plot (per plot:year combination), and then the cbind form would make sense. Otherwise stay with your 0/1 outcome vector indicating success or failure.
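A hedged sketch of that aggregation, using the simulated DT from the question and dplyr (an assumption; any aggregation tool works). Because cover1 is constant within each plot:year cell once subplot is dropped, the binomial (cbind) fit and the raw 0/1 fit should give the same estimates:
library(dplyr)
DT_agg <- DT %>%
  group_by(plot, year, cover1) %>%
  summarise(Eaten_suc = sum(eaten), Eaten_fail = sum(eaten == 0), .groups = "drop")

m_binary   <- glmer(eaten ~ cover1 + (1 | year) + (1 | plot), data = DT, family = binomial)
m_binomial <- glmer(cbind(Eaten_suc, Eaten_fail) ~ cover1 + (1 | year) + (1 | plot),
                    data = DT_agg, family = binomial)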
What would be the correct random model structure and why?
This is a topic that is extremely hard to give a definitive answer to, and one that is still actively researched. Depending on your statistical paradigm opinions differ greatly.
Method 1: The classic approach
Classic mixed modelling is based upon knowledge of the data you are working with. In general there are several "rules of thumb" for choosing these parameters. I've gone through a few in my answer here. In general, if you are "not interested" in the systematic effect and it can be thought of as a random sample of some population, then it could be a random effect. If it is the population, i.e. the samples do not change if the process is repeated, then it likely shouldn't be.
This approach often yields "decent" choices for those who are new to mixed effect models, but is highly criticized by authors who tend towards methods similar to those we'd use in non-mixed models (eg. visualizing to base our choice and testing for significance).
Method 2: Using visualization
If you are able to split your data into independent subgroups while keeping the fixed effect structure, a reasonable approach for checking potential random effects is to estimate marginal models (e.g. using glm) across these subgroups and see whether the fixed effects are "normally distributed" between these observations. The function lmList (in lme4) is designed for this specific approach. In linear models we would indeed expect these to be normally distributed, and thus we can get an indication of whether a specific grouping "might" be a valid random effect structure. I believe the same is approximately true in the case of generalized linear models, but I lack references. I know that Ben Bolker has advocated for this approach in a prior article of his (the first reference below) that I used during my thesis. However, this is only a valid approach for strictly separable data, and the implementation is not robust in the case where factor levels are not shared across all groups.
So in short: If you have the right data, this approach is simple, fast and seemingly highly reliable.
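A hedged sketch of that check with the question's simulated DT (a real dataset with more plots would be far more informative):
fits <- lmList(eaten ~ cover1 | plot, data = DT, family = binomial)   # one glm per plot
coef(fits)                            # per-plot intercepts and slopes
hist(coef(fits)[, "(Intercept)"])     # eyeball whether the intercepts look roughly normal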
Method 3: Fitting maximal/minimal models and decreasing/expanding model based on AIC or AICc (or p-value tests or alternative metrics)
Finally, an alternative is to use a "step-wise"-like procedure. There are advocates of both starting with maximal and with minimal models (I'm certain at least one of my references below talks about the problems with both; otherwise check glmmFAQ) and then testing your random effects for their validity. Just like classic regression, this is somewhat of a double-edged sword. The reason is both extremely simple to understand and amazingly complex to comprehend.
For this method to be successful you would have to perform cross-validation or out-of-sample validation to avoid selection bias, just as for standard models, but unlike standard models the sampling becomes complicated because:
The fixed effects are conditional on the random structure.
You will need your training and testing samples to be independent.
As the splitting depends on your random structure, and this structure is chosen in a step-wise approach, it is hard to avoid information leakage in some of your models.
The only certain way to avoid problems here is to define the space that you will be testing in advance and to select samples based on the most restrictive model definition.
Next we also have a problem with the choice of metric for evaluation. If one is interested in the random effects it makes sense to use AICc (the AIC estimate of the conditional model), while for fixed effects it might make more sense to optimize AIC (the AIC estimate of the marginal model). I'd suggest checking the references to AIC and AICc on glmmFAQ, and be wary, since the large-sample results for these may be uncertain outside a very restrictive set of mixed models (namely those with "enough independent samples over random effects").
Another approach here is to use p-values instead of an information metric for the procedure. But one should likely be even more wary of tests on random effects. Even using a Bayesian approach or bootstrapping with an incredibly high number of resamples, these tests are sometimes just not very good. Again we need "enough independent samples over random effects" to ensure the accuracy.
The DHARMa package provides some very interesting testing methods for mixed effects that might be better suited. While I was working in the area the author was still (seemingly) developing an article documenting the validity of their chosen method. Even if one does not use it for initial selection I can only recommend checking it out and deciding whether one believes in their methods. It is by far the simplest approach for a visual test with a simple interpretation (i.e. almost no prior knowledge is needed to interpret the plots).
A final note on this method would thus be: It is indeed an approach, but one I would personally not recommend. It requires either extreme care or the author accepting ignorance of model assumptions.
Conclusion
Mixed effect parameter selection is something that is difficult. My experience tells me that mostly a combination of methods 1 and 2 is used, while method 3 seems to be used mostly by newer authors, and these tend to ignore out-of-sample error (measuring model metrics on the data used for training), ignore the independence-of-samples problems when fitting random effects, or restrict themselves to only using this method for testing fixed effect parameters. All 3 do however have some validity. I myself tend to be in the first group, and I base my decision upon my "experience" within the field, rules of thumb and the restrictions of my data.
Your specific problem.
Given your specific problem I would assume a mixed effect structure of (1|year/plot/subplot) would be the correct one. If you add autoregressive (temporal-spatial) effects, year likely disappears. The reason for this structure is that in geo-analysis and the analysis of land plots the classic approach is to include an effect for each plot. If each plot can then further be indexed into subplots, it is natural to think of "subplot" as nested in "plot". Assuming you do not model autoregressive effects, I would think of time as random for the reasons that you already stated: some years we will have drier and hotter weather than others. As the plots measured have to be present in a given year, these would be nested in year.
This is what I'd call the maximal model, and it might not be feasible depending on your amount of data. In that case I would try using (1|time) + (1|plot/subplot). If both are feasible I would compare these models, either using bootstrapping methods or approximate LRT tests.
Note: It seems not unlikely that (1|time/plot/subplot) would result in "individual-level effects", i.e. one random effect per row in your data. For reasons that I have long since forgotten (but once read) it is not plausible to have individual (also called subject-level) effects in binary mixed models. In this case it might also make sense to use the alternative approach, or to test whether your model assumptions hold when withholding subplot from your random effects.
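A hedged sketch of the two candidate fits discussed above (DT_full stands in for the real, unbalanced dataset with a subplot column; it is not the simulated example):
m_max <- glmer(eaten ~ cover1 + (1 | year/plot/subplot), data = DT_full, family = binomial)
m_red <- glmer(eaten ~ cover1 + (1 | year) + (1 | plot/subplot), data = DT_full, family = binomial)
# Compare the two as suggested above, e.g. with a parametric bootstrap or an approximate LRT.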
Below I've added some useful references, some of which are directly relevant to the question. In addition, check out Ben Bolker's glmmFAQ site.
References
Bolker, B. et al. (2009). "Generalized linear mixed models: a practical guide for ecology and evolution". In: Trends in Ecology & Evolution 24.3, pp. 127-135.
Bolker, B. et al. (2011). "GLMMs in action: gene-by-environment interaction in total fruit production of wild populations of Arabidopsis thaliana". In: Revised version, part 1, pp. 127-135.
Eager, C. and Roy, J. (2017). "Mixed effects models are sometimes terrible". In: arXiv preprint arXiv:1701.04858. URL: https://arxiv.org/abs/1701.04858 (last seen 19.09.2019).
Feng, Cindy et al. (2017). "Randomized quantile residuals: an omnibus model diagnostic tool with unified reference distribution". In: arXiv preprint arXiv:1708.08527 (last seen 19.09.2019).
Gelman, A. and Hill, Jennifer (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.
Hartig, F. (2019). DHARMa: Residual Diagnostics for Hierarchical (Multi-Level / Mixed) Regression Models. R package version 0.2.4. URL: http://florianhartig.github.io/DHARMa/ (last seen 19.09.2019).
Lee, Y. and Nelder, J. A. (2004). "Conditional and Marginal Models: Another View". In: Statistical Science 19.2, pp. 219-238. DOI: 10.1214/088342304000000305. URL: https://doi.org/10.1214/088342304000000305
Lin, D. Y. et al. (2002). "Model-checking techniques based on cumulative residuals". In: Biometrics 58.1, pp. 1-12 (last seen 19.09.2019).
Lin, X. (1997). "Variance Component Testing in Generalised Linear Models with Random Effects". In: Biometrika 84.2, pp. 309-326. ISSN: 00063444. URL: http://www.jstor.org/stable/2337459 (last seen 19.09.2019).
Stiratelli, R. et al. (1984). "Random-effects models for serial observations with binary response". In: Biometrics, pp. 961-971.

How to compare temperature data over a period of time

My aim is to evaluate the effect of a treatment (on microclimate data) applied to a canopy compared to a control. Therefore I placed three data loggers in the canopy at 5 sites and for each variant ("treatment applied" vs. "control"). Data is averaged every 5 minutes over a period of 217 days. The logged data looks like this:
Timepoint,Time,Celsius(°C),Humidity(%rh),dew point(°C)
1,27/03/2019 17:02:39,23.5,37.5,8.2
2,27/03/2019 17:07:39,23.5,36.5,7.8
3,27/03/2019 17:12:39,23.5,36.5,7.8
4,27/03/2019 17:17:39,24.0,37.5,8.6
5,27/03/2019 17:22:39,23.5,36.0,7.6
6,27/03/2019 17:27:39,23.0,37.0,7.5
7,27/03/2019 17:32:39,22.5,34.5,6.1
8,27/03/2019 17:37:39,22.5,34.5,6.1
Records are summarized daily to obtain the mean/max/min temperature for each of the 217 days. Regardless of the site, I want to determine the effect of the treatment applied and to expose the differences over time.
I was told that time series analysis doesn't work here. I tried to apply linear regression (inspired by this paper: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0234436) to the data, but since the control does not affect the treatment I discarded this approach.
So my question is: which method would be the proper way to analyse this microclimatic data in R?
You can try running a linear regression with Time as a function of Humidity and Celsius for the control and the treatment separately, and then compare the slopes of both models for each site. Naturally, if you get a higher slope for your treatment than for your control, this indicates a response to the treatment - the higher the delta between the slopes, the stronger the response to the treatment.
The model would go something like this (for a single site):
lm(Time~Celsius+Humidity, data = ControlData)
lm(Time~Celsius+Humidity, data = TreatmentData)
Then you can start working with the coefficients and derive results from the differences and from the general slope of the regression line for each site. After that, you can even combine the results by averaging the coefficients of the 5 control regressions and comparing them to the average of the 5 treatment regressions (since the model is linear this should be statistically valid).
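A hedged sketch of the coefficient comparison for one site, assuming data frames ControlData and TreatmentData (hypothetical names) with columns Time, Celsius and Humidity:
fit_ctrl  <- lm(Time ~ Celsius + Humidity, data = ControlData)
fit_treat <- lm(Time ~ Celsius + Humidity, data = TreatmentData)
# Difference between the temperature slopes of treatment and control at this site
delta_celsius <- coef(fit_treat)["Celsius"] - coef(fit_ctrl)["Celsius"]
delta_celsius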

How to add level2 predictors in multilevel regression (package nlme)

I have a question concerning multilevel regression models in R, specifically how to add predictors for my level-2 "measure".
Please consider the following example (this is not a real dataset, so the values might not make much sense in reality):
date id count bmi poll
2012-08-05 1 3 20.5 1500
2012-08-06 1 2 20.5 1400
2012-08-05 2 0 23 1500
2012-08-06 2 3 23 1400
The data contains
different persons ("id"...so it's two persons)
the body mass index of each person ("bmi", so it doesn't vary within an id)
the number of heart problems each person has on a specific day ("count"). So person 1 had three problems on August 5th, whereas person 2 had no difficulties/problems on that day
the amount of pollutants (like ozone or sulfur dioxide) which was measured on that given day
My general research question is whether the amount of pollutants affects the number of heart problems in the population.
In a first step, this could be a simple linear regression:
lm(count ~ poll)
However, my data for each day is so to say clustered within persons. I have two measures from person 1 and two measures from person 2.
So my basic idea was to set up a multilevel model with persons (id) as my level 2 variable.
I used the nlme package for this analysis:
lme(fixed=count ~ poll, random = ~poll|id, ...)
No problems so far.
However, the true influence on level 2 might not only come from the fact that I have different persons. Rather, it would be much more likely that the effect WITHIN a person comes from his or her bmi (and many other person-related variables, like age, amount of smoking and so on).
To make a long story short:
How can I specify such level-2 predictors in the lme function?
Or in other words: how can I set up a model where the relationship between heart problems and pollution is different/clustered/moderated by the body mass index of a person (and, as I said, maybe additionally by this person's amount of smoking or age)?
Unfortunately, I don't have a clue how to tell R what I want. I know of other software (one of them called HLM) which is capable of doing what I want, but I'm quite sure that R can do this as well...
So, many thanks for any help!
deschen
Short answer: you do not have to, as long as you correctly specify the random effects. The lme function automatically detects which variables are level 1 or level 2. Consider this example using Oxboys, where each subject was measured 9 times. For the time being, let me use lmer from the lme4 package.
library(nlme)
library(dplyr)
library(lme4)
library(lmerTest)
Oxboys %>% #1
filter(as.numeric(Subject)<25) %>% #2
mutate(Group=rep(LETTERS[1:3], each=72)) %>% #3
lmer(height ~ Occasion*Group + (1|Subject), data=.) %>% #4
anova() #5
Here I am picking 24 subjects (#2) and arranging them into 3 groups (#3) to make this data balanced. Now the design of this study is a split-plot design with a repeated-measures factor (Occasion) with q=9 levels and a between-subject factor (Group) with p=3 levels. Each group has n=8 subjects. Occasion is a level-1 variable while Group is level 2.
In #4, I did not specify which variable is level 1 or 2, but lmer gives you the correct output. How do I know it is correct? Let us check the multilevel model's degrees of freedom for the fixed effects. If your data is balanced, the Kenward-Roger approximation used in lmerTest will give you exact dfs and F/t-ratios, according to this article. That is, in this example the dfs for the tests of Group, Occasion, and their interaction should be p-1=2, q-1=8, and (p-1)*(q-1)=16, respectively. The df for the Subject error term is (n-1)p = 21 and the df for the Subject:Occasion error term is p(n-1)(q-1) = 168. In fact, these are the "exact" values we get from the anova output (#5).
I do not know what algorithm lme uses for approximating dfs, but lme does give you the same dfs. So I am assuming that it is accurate.
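Applied back to the original question, a hedged sketch (heart_df is a hypothetical name for your data): a level-2 predictor such as bmi simply enters the fixed-effects formula, and the cross-level interaction poll:bmi lets the pollution effect vary with bmi.
library(nlme)
mod_l2 <- lme(fixed = count ~ poll * bmi, random = ~ poll | id, data = heart_df)
summary(mod_l2)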
