Laplace smoothing in R

I've been trying for a while to use Laplace smoothing on different columns but, honestly, I haven't found a working method, and there's not much content about this subject on the internet.
I'm currently working on a database with 43 variables and over 27k observations. The data frame (time_data) looks like this:
time_data
T01 T02 T03 T_TOT
1 1 4 3 8
2 5 2 0 7
3 3 1 10 14
T_TOT = T01 + T02 + T03
As the name implies, each value corresponds to an amount of seconds. For the sake of the example, let's say the maximum value T_TOT can take is 15 (seconds).
Because the values are presented as discrete data, I was planning on decomposing the Laplace smoothing into different columns and calculating the new values at the end. The problem is that, in reality, I have 42 variables, and doing this I would end up with far too many columns.
Is there any way in R to calculate the Laplace smoothing?
If not, how could I create a loop to generate the new columns with the Laplace-smoothed values?
Expected Outcome:
T01 T02 T03 T_TOT L_T01 L_T02 L_T03
1 1 4 3 8 ... ... ...
2 5 2 0 7 ... ... ...
3 3 1 10 14 ... ... ...
A brief explanation of Laplace smoothing: https://programmerclick.com/article/9417297724/
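Not an authoritative answer, but here is a minimal sketch of one way to do this in base R, assuming the usual Laplace formula (x + alpha) / (T_TOT + alpha * k), where k is the number of T columns and alpha is the smoothing constant (commonly 1); the column selection below is a placeholder for the real 42 columns:
alpha  <- 1
t_cols <- c("T01", "T02", "T03")   # in the real data: all 42 T columns
k      <- length(t_cols)
# loop over the time columns and add one Laplace-smoothed column for each
for (col in t_cols) {
  time_data[[paste0("L_", col)]] <-
    (time_data[[col]] + alpha) / (time_data$T_TOT + alpha * k)
}
This produces L_T01, L_T02 and L_T03 as in the expected outcome, without creating any intermediate variables.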

Related

Nested logit model using panel data in R

I am new to R and I would love it if you could help me with this, because I am having serious difficulties.
I have unbalanced panel data that shows companies' monthly performance compared to the rest of the market in dollar terms (e.g., this month company 1 made $1,000 more than the market average). Each of these companies decided on a strategy when it entered the market (1 through 8). These strategies are nested into two different groups (a, b), so that strategies 1, 2, and 3 are part of group a, while strategies 4 through 8 are part of group b. I need a ranking of the strategies from best to worst.
I have discretized my DV so that it now only shows whether, in a given month, a company performed higher or lower than the market. However, I am not sure this is the right approach, because I would then lose how much better or worse companies performed each month compared to the market.
My data looks like this:
ID Main Strategy YearMonth DiffPerformance Control1 Control2 DiffPerformanceHL
1 a 2 201706 9.037 2 57 H
1 a 2 201707 4.371 2 57 H
1 a 2 201708 1.633 2 57 H
1 a 2 201709 -3.521 2 59 L
1 a 2 201710 13.096 2 59 H
1 a 2 201711 5.070 2 60 H
1 a 2 201712 4.25 2 60 H
2 b 5 201904 6.78 4 171 H
2 b 5 201905 -15.26 4 169 L
2 b 5 201906 7.985 4 169 H
Where ID is the company, Main is the group (a or b), Strategy is 1 through 8 and nested as previously stated, YearMonth is the specific month, DiffPerformance is the DV as a continuous variable, Control1 is a categorical variable (1 through 6) that is static over time, Control2 is a count control variable that changes over time, and DiffPerformanceHL is the discretized DV.
Can you please help me figure out how to create a nested logit model in R? I would be very appreciative.
Thanks
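Not a full answer, but as a pointer for readers: mlogit() can fit a nested logit via its nests argument once the data has been reshaped to one row per observation-alternative pair. A purely generic, hypothetical sketch (the names long_df, obs_id, strategy and chosen below are placeholders, not the panel data shown above):
library(dfidx)    # for dfidx(), which declares the choice-situation / alternative indexes
library(mlogit)
# long_df: one row per (observation, strategy), with a 0/1 column 'chosen'
nl_data <- dfidx(long_df, idx = c("obs_id", "strategy"))
nl_fit  <- mlogit(chosen ~ 0 | Control1 + Control2, data = nl_data,
                  nests = list(a = c("1", "2", "3"),
                               b = c("4", "5", "6", "7", "8")),
                  un.nest.el = TRUE)
summary(nl_fit)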

sandwich + mlogit: `Error in ef/X : non-conformable arrays` when using `vcovHC()` to compute robust/clustered standard errors

I am trying to compute robust/clustered standard errors after using mlogit() to fit a Multinomial Logit (MNL) model in a discrete choice problem. Unfortunately, I suspect I am having problems because my data is in long format (this is a must in my case), and I am getting the error Error in ef/X : non-conformable arrays from sandwich::vcovHC( , "HC0").
The Data
For illustration, please consider the following data. It represents data from 5 individuals (id_ind) that choose among 3 alternatives (altern). Each of the five individuals chose three times; hence we have 15 choice situations (id_choice). Each alternative is represented by two generic attributes (x1 and x2), and the choices are registered in y (1 if selected, 0 otherwise).
df <- read.table(header = TRUE, text = "
id_ind id_choice altern x1 x2 y
1 1 1 1 1.586788801 0.11887832 1
2 1 1 2 -0.937965347 1.15742493 0
3 1 1 3 -0.511504401 -1.90667519 0
4 1 2 1 1.079365680 -0.37267925 0
5 1 2 2 -0.009203032 1.65150370 1
6 1 2 3 0.870474033 -0.82558651 0
7 1 3 1 -0.638604013 -0.09459502 0
8 1 3 2 -0.071679538 1.56879334 0
9 1 3 3 0.398263302 1.45735788 1
10 2 4 1 0.291413453 -0.09107974 0
11 2 4 2 1.632831160 0.92925495 0
12 2 4 3 -1.193272276 0.77092623 1
13 2 5 1 1.967624379 -0.16373709 1
14 2 5 2 -0.479859282 -0.67042130 0
15 2 5 3 1.109780885 0.60348187 0
16 2 6 1 -0.025834772 -0.44004183 0
17 2 6 2 -1.255129594 1.10928280 0
18 2 6 3 1.309493274 1.84247199 1
19 3 7 1 1.593558740 -0.08952151 0
20 3 7 2 1.778701074 1.44483791 1
21 3 7 3 0.643191170 -0.24761157 0
22 3 8 1 1.738820924 -0.96793288 0
23 3 8 2 -1.151429915 -0.08581901 0
24 3 8 3 0.606695064 1.06524268 1
25 3 9 1 0.673866953 -0.26136206 0
26 3 9 2 1.176959443 0.85005871 1
27 3 9 3 -1.568225496 -0.40002252 0
28 4 10 1 0.516456176 -1.02081089 1
29 4 10 2 -1.752854918 -1.71728381 0
30 4 10 3 -1.176101700 -1.60213536 0
31 4 11 1 -1.497779616 -1.66301234 0
32 4 11 2 -0.931117325 1.50128532 1
33 4 11 3 -0.455543630 -0.64370825 0
34 4 12 1 0.894843784 -0.69859139 0
35 4 12 2 -0.354902281 1.02834859 0
36 4 12 3 1.283785176 -1.18923098 1
37 5 13 1 -1.293772990 -0.73491317 0
38 5 13 2 0.748091387 0.07453705 1
39 5 13 3 -0.463585127 0.64802031 0
40 5 14 1 -1.946438667 1.35776140 0
41 5 14 2 -0.470448172 -0.61326604 1
42 5 14 3 1.478763383 -0.66490028 0
43 5 15 1 0.588240775 0.84448489 1
44 5 15 2 1.131731049 -1.51323232 0
45 5 15 3 0.212145247 -1.01804594 0
")
The problem
Consequently, we can fit an MNL using mlogit() and extract their robust variance-covariance as follows:
library(mlogit)
library(sandwich)
mo <- mlogit(formula = y ~ x1 + x2 | 0,
             method = "nr",
             data = df,
             idx = c("id_choice", "altern"))
sandwich::vcovHC(mo, "HC0")
#Error in ef/X : non-conformable arrays
As we can see, there is an error produced by sandwich::vcovHC, which says that ef/X is non-conformable, where X <- model.matrix(x) and ef <- estfun(x, ...). After looking through the source code on the GitHub mirror, I spotted the problem: given that the data is in long format, ef has dimensions 15 x 2 while X has dimensions 45 x 2.
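To make the mismatch concrete, the dimensions mentioned above can be checked directly (a quick sketch, run after fitting mo as above):
dim(estfun(mo))        # 15 x 2: one row per choice situation
dim(model.matrix(mo))  # 45 x 2: one row per alternative (long format)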
My workaround
Given that the show must go on, I am computing the robust and clustered standard errors manually, using some functions that I borrowed from sandwich and adjusted to match Stata's output.
> Robust Standard Errors
These lines are inspired by the sandwich::meat() function.
psi  <- estfun(mo)                                 # score contributions, one row per choice situation
k    <- NCOL(psi)
n    <- NROW(psi)
rval <- (n / (n - 1)) * crossprod(as.matrix(psi))  # "meat" with an n/(n-1) finite-sample adjustment
vcov(mo) %*% rval %*% vcov(mo)                     # sandwich product, matching Stata's robust VCE below
# x1 x2
# x1 0.23050261 0.09840356
# x2 0.09840356 0.12765662
Stata Equivalent
qui clogit y x1 x2 ,group(id_choice) r
mat li e(V)
symmetric e(V)[2,2]
y: y:
x1 x2
y:x1 .23050262
y:x2 .09840356 .12765662
> Clustered Standard Errors
Here, given that each individual answers 3 questions, it is highly likely that there is some degree of correlation within individuals; hence cluster corrections should be preferred in such situations. Below I compute the cluster correction for this case and show the equivalence with the Stata output of clogit, cluster().
id_ind_collapsed <- df$id_ind[!duplicated(mo$model$idx$id_choice)]  # one id_ind value per choice situation
psi_2 <- rowsum(psi, group = id_ind_collapsed)                      # sum scores within each individual (cluster)
k_cluster <- NCOL(psi_2)
n_cluster <- NROW(psi_2)
rval_cluster <- (n_cluster/(n_cluster-1)) * crossprod(as.matrix(psi_2))
vcov(mo) %*% rval_cluster %*% vcov(mo)                              # clustered sandwich product
# x1 x2
# x1 0.1766707 0.1007703
# x2 0.1007703 0.1180004
Stata equivalent
qui clogit y x1 x2 ,group(id_choice) cluster(id_ind)
symmetric e(V)[2,2]
y: y:
x1 x2
y:x1 .17667075
y:x2 .1007703 .11800038
The Question:
I would like to accommodate my computations within the sandwich ecosystem, meaning not computing the matrices manually but actually using the sandwich functions. Is it possible to make it work with models in long format like the one described here? For example, providing the meat and bread objects directly to perform the computations? Thanks in advance.
PS: I noted that there is a dedicated bread function for mlogit in sandwich, but I could not spot a corresponding meat for mlogit; anyway, I am probably missing something here...
Why vcovHC does not work for mlogit
The class of HC covariance estimators can only be applied to models with a single linear predictor, where the score function, a.k.a. estimating function, is the product of so-called "working residuals" and a regressor matrix. This is explained in some detail in the Zeileis (2006) paper (see Equation 7), provided as vignette("sandwich-OOP", package = "sandwich") in the package. The ?vcovHC documentation also pointed to this but did not explain it very well. I have now improved this in the documentation at http://sandwich.R-Forge.R-project.org/reference/vcovHC.html:
The function meatHC is the real work horse for estimating the meat of HC sandwich estimators - the default vcovHC method is a wrapper calling sandwich and bread. See Zeileis (2006) for more implementation details. The theoretical background, exemplified for the linear regression model, is described below and in Zeileis (2004). Analogous formulas are employed for other types of models, provided that they depend on a single linear predictor and the estimating functions can be represented as a product of “working residual” and regressor vector (Zeileis 2006, Equation 7).
This means that vcovHC() is not applicable to multinomial logit models as they generally use separate linear predictors for the separate response categories. Similarly, two-part or hurdle models etc. are not supported.
Basic "robust" sandwich covariance
Generally, for computing the basic Eicker-Huber-White sandwich covariance matrix estimator, the best strategy is to use the sandwich() function and not the vcovHC() function. The former works for any model with estfun() and bread() methods.
For linear models sandwich(..., adjust = FALSE) (default) and sandwich(..., adjust = TRUE) correspond to HC0 and HC1, respectively. In a model with n observations and k regression coefficients the former standardizes with 1/n and the latter with 1/(n-k).
Stata, however, scales with 1/(n-1) in logit models; see:
Different Robust Standard Errors of Logit Regression in Stata and R. To the best of my knowledge, there is no clear theoretical reason for using specifically one or the other adjustment, and in moderately large samples it makes no difference anyway.
Remark: The adjustment with 1/(n-1) is not directly available in sandwich() as an option. However, coincidentally, it is the default in vcovCL() without specifying a cluster variable (i.e., treating each observation as a separate cluster). So this is a convenient "trick" if you want to get exactly the same results as Stata.
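For concreteness, the three scalings discussed above can be obtained as follows (a short sketch using the mo object fitted in the question; sandwich() and vcovCL() are both from the sandwich package):
sandwich(mo)                 # 1/n scaling (HC0-type)
sandwich(mo, adjust = TRUE)  # 1/(n - k) scaling (HC1-type)
vcovCL(mo)                   # 1/(n - 1) scaling, i.e. the Stata-style adjustment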
Clustered covariance
This can be computed "as usual" via vcovCL(..., cluster = ...). For mlogit models you just have to take into account that the cluster variable needs to be provided only once per choice situation (as opposed to being stacked several times in long format).
Replicating Stata results
With the data and model from your post:
vcovCL(mo)
## x1 x2
## x1 0.23050261 0.09840356
## x2 0.09840356 0.12765662
vcovCL(mo, cluster = df$id_choice[1:15])
## x1 x2
## x1 0.1766707 0.1007703
## x2 0.1007703 0.1180004

Multivariate detrending under common trend of a time series data in R

I am looking for multivariate detrending under a common trend for time series data in R.
Time series data sample:
> head(d)
T x1 x2 x3 x4
1 1 2 4 3 1
2 2 3 5 4 4
3 3 6 6 6 6
4 4 8 9 10 7
5 5 10 13 20 9
I would like to detrend the above multivariate time series dataset d under a common trend. I hope I have been clear in explaining the problem I am facing.
Thanks!
You can use multivariate regression to solve for the common trend coefficients. Because the betas are the same across series (i.e., in Y = X*beta the per-series coefficient rows are identical), you need to account for that constraint. However, you can do this simply by stringing all the Y columns together.
# d is assumed to contain only the series columns (drop the time column T first)
dvec  <- as.numeric(as.matrix(d))   # stack all series into one long vector
n     <- dim(d)[1]
ncols <- dim(d)[2]
x     <- rep(1:n, ncols)            # common time index, repeated for each series
model <- lm(dvec ~ x)               # one common linear trend
Then you can do
d <- matrix(model$residuals, nrow = n)   # detrended series, back in the original shape
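For illustration, a worked run on the head(d) sample above (assuming, as noted, that the time column T is dropped so that only x1 through x4 are detrended):
d <- data.frame(x1 = c(2, 3, 6, 8, 10),
                x2 = c(4, 5, 6, 9, 13),
                x3 = c(3, 4, 6, 10, 20),
                x4 = c(1, 4, 6, 7, 9))
dvec  <- as.numeric(as.matrix(d))
x     <- rep(1:nrow(d), ncol(d))
model <- lm(dvec ~ x)
detrended <- matrix(model$residuals, nrow = nrow(d),
                    dimnames = list(NULL, colnames(d)))
detrended   # each column is the corresponding series with the common trend removed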

drawing multiple boxplots from imputed data in R

I have an imputed dataset that I'm analysing, and I'm trying to draw boxplots, but I can't wrap my head around the proper procedure.
My data (a sample; the original has 20 observations per imputation and 13 variables per group, and all values range from 0 to 25):
.imp .id FTE_RM FTE_PD OMZ_RM OMZ_PD
1 1 25 25 24 24
1 2 4 0 2 6
1 3 11 5 3 2
1 4 12 3 3 3
2 1 20 15 15 15
2 2 4 1 2 3
2 3 0 0 0 6
2 4 20 0 0 0
.imp signifies the imputation round, .id the identifier for each observation.
I want to draw all the FTE_* variables in a single plot (and the OMZ_* variables in another), but I wonder what to do with all the imputations: can I just include all values? The imputed data now has 500 observations. With, for instance, an ANOVA I'd need to average the ANOVA results over the 5 imputations to get back to 20 observations. But is this needed for a boxplot as well, since I only deal with medians, means, maxima and minima?
Such as:
data_melt <- melt(df[grep("^FTE_", colnames(df))])
ggplot(data_melt, aes(x=variable, y=value))+geom_boxplot()
I've played a couple of times with ggplot, but consider myself a complete newbie.
I assume you want to keep the identifiers .imp and .id after melting, so rather use:
library(reshape2)  # for melt()
data_melt <- melt(df, id.vars = c(".imp", ".id"))
For completeness of the data frame, it probably helps to introduce a column that identifies the type (FTE vs. OMZ):
data_melt$type <- ifelse(grepl("FTE",data_melt$variable),"FTE","OMZ")
Having this data.frame you can, for example, facet on the type (alternatively you can just use a simple filter statement on data_melt to restrict to one type):
library(ggplot2)
ggplot(data_melt, aes(x = variable, y = value)) + geom_boxplot() + facet_wrap(~ type, scales = "free_x")
The result is a faceted plot: one panel of boxplots for the FTE_* variables and one for the OMZ_* variables.
EDIT: fixed the data mess-up

R, lme: specifying random effects for mixed model of before-after-gradient analysis

I'm trying to measure the biological impacts of an industrial development using a Before-After-Gradient approach. I am using a linear mixed model approach in R, and am having trouble specifying an appropriate model, especially the random effects. I've spent a lot of time researching this, but so far haven't come up with a clear solution--at least not one that I understand. I am new to LMM (and R for that matter) so would welcome any advice.
The response variables (for example, changes in abundance of key species) will be measured as a function of distance from the edge of impact, using plots established at fixed distances along multiple transects ("gradients") radiating out from the edge of the disturbance. Ideally, each plot would be sampled at multiple times both before and after the impact; however, for simplicity I'm starting by assuming the simplest case, where each plot is sampled once before and once after the impact. Assume also that the individual gradients are far enough apart that they can be considered spatially independent.
First, some simulated data. The effect here is linear instead of curvilinear, but you get the idea.
> str(bag)
'data.frame': 30 obs. of 5 variables:
$ Plot : Factor w/ 15 levels "G1-D0","G1-D100",..: 1 2 4 5 3 6 7 9 10 8 ...
$ Gradient: Factor w/ 3 levels "1","2","3": 1 1 1 1 1 2 2 2 2 2 ...
$ Distance: Factor w/ 5 levels "0","100","300",..: 1 2 3 4 5 1 2 3 4 5 ...
$ Period : Factor w/ 2 levels "After","Before": 2 2 2 2 2 2 2 2 2 2 ...
$ response: num 0.633 0.864 0.703 0.911 0.676 ...
> bag
Plot Gradient Distance Period response
1 G1-D0 1 0 Before 0.63258749
2 G1-D100 1 100 Before 0.86422356
3 G1-D300 1 300 Before 0.70262745
4 G1-D700 1 700 Before 0.91056851
5 G1-D1500 1 1500 Before 0.67637353
6 G2-D0 2 0 Before 0.75879579
7 G2-D100 2 100 Before 0.77981992
8 G2-D300 2 300 Before 0.87714158
9 G2-D700 2 700 Before 0.62888739
10 G2-D1500 2 1500 Before 0.83217617
11 G3-D0 3 0 Before 0.87931801
12 G3-D100 3 100 Before 0.81931761
13 G3-D300 3 300 Before 0.74489963
14 G3-D700 3 700 Before 0.68984485
15 G3-D1500 3 1500 Before 0.94942006
16 G1-D0 1 0 After 0.00010000
17 G1-D100 1 100 After 0.05338171
18 G1-D300 1 300 After 0.15846741
19 G1-D700 1 700 After 0.34909588
20 G1-D1500 1 1500 After 0.77138824
21 G2-D0 2 0 After 0.00010000
22 G2-D100 2 100 After 0.05801157
23 G2-D300 2 300 After 0.11422562
24 G2-D700 2 700 After 0.34208601
25 G2-D1500 2 1500 After 0.52606733
26 G3-D0 3 0 After 0.00010000
27 G3-D100 3 100 After 0.05418663
28 G3-D300 3 300 After 0.19295391
29 G3-D700 3 700 After 0.46279103
30 G3-D1500 3 1500 After 0.58556186
As far as I can tell, the fixed effects should be Period (Before,After) and Distance, treating distance as continuous (not a factor) so we can estimate the slope. The interaction between Period and Distance (equivalent to the difference in slopes, before vs. after) measures the impact. I'm still scratching my head over how to specify the random effects. I assume I should control for variation among gradients, as follows:
library(nlme)
result <- lme(response ~ Distance + Period + Distance:Period,
              random = ~ 1 | Gradient, data = bag)
However, I suspect I may be missing some source of variation. For example, I'm not sure the above model controls for the re-sampling of individual plots before and after. Any suggestions?
With one sample per gradient, as you have, there's no need to specify random effects or anything about the gradients; you can do this with a straight multiple regression. Once you have multiple measures within each gradient, you can use the model you've specified, which says that there is an expected main effect of gradient on the intercept of the model, but that the effects (slopes) of Distance, Period, and their interaction should be fixed.
You could specify additional random effects if you expect there to be an appreciable amount of variability among gradients in your other predictors. I'm not sure how you do it in lme, or even if you can, but in lmer an example might be:
library(lme4)
lmer(response ~ Distance * Period + (1 + Distance | Gradient), data = bag)
That would allow the Distance slope to have a fixed effect component and one that varied with gradient. You can look up further specification of random effects but hopefully you see the general idea and then you can decide how complex to make your model.
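For reference, a sketch of the analogous random-slope specification in nlme::lme (my addition, since the answer above only shows the lmer form; purely illustrative syntax, and with only three gradients such a rich random-effects structure may well not be estimable):
library(nlme)
lme(response ~ Distance * Period, random = ~ 1 + Distance | Gradient, data = bag)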
