Change significance level of MannKendall trend test in R

I want to perform the Mann-Kendall test at the 99% and 90% confidence levels (CI). When running the lines below, the analysis is based on a 95% CI. How do I change the code to perform it at a 99% and 90% CI?
library(Kendall)
vec <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
MannKendall(vec)

I cannot comment yet, but I have a question: what do you mean when you say that you need to perform the analysis at a 99 and 90% CI? Do you want to know whether your value is significant at the 99 and 90% significance levels?
If you just need to know whether your score is significant at the 99 and 90% levels, then r2evans was right: the alpha, or significance level, is just an arbitrary threshold that you use to define how small your probability should be before you reject the assumption that there "is no effect", or in this case that the observations are independent. More importantly, the calculation of the p-value is independent of the confidence level you select, so if you want to know whether your result is significant at different confidence levels, just compare your p-value against those levels.
I checked how the function works and saw no indication that the selected alpha level affects the results. If you check the source code of MannKendall(x) (by typing MannKendall without parentheses), you can see that it is just Kendall(1:length(x), x). The function Kendall calculates a statistic, tau, that "measures the strength of monotonic association between the vectors x and y", and then returns a p-value by calculating how likely your observed tau is under the assumption that there is no relation between 1:length(x) and x; in other words, how likely it is that you obtain that tau just by chance. As you can see, this does not depend on the confidence level at all. The confidence level only matters at the end, when you decide how small the probability of your tau should be before you conclude that it cannot have been obtained just by chance.
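As a quick sketch (with made-up data), the p-value is computed once and can then be compared against whatever alpha you choose:
library(Kendall)
vec <- c(1, 2, 4, 3, 5, 7, 6, 9, 8, 10)   ## hypothetical, slightly shuffled series
res <- MannKendall(vec)
res$sl           ## the two-sided p-value
res$sl < 0.01    ## significant at the 99% level?
res$sl < 0.10    ## significant at the 90% level?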

Output from Linear Mixed Models differs from Estimated Marginal Means

I have a query about the output statistics from a linear mixed model (fit with the lmer function) relative to the statistics from the estimated marginal means computed from the same model.
Essentially, I am running an LMM comparing the within-subjects effect of different contexts (with "Negative" coded as the baseline) on enjoyment ratings. The LMM output suggests that the difference between negative and polite contexts is not significant, with a p-value of .35. See the screenshot below with the relevant line highlighted:
LMM output
However, when I then run the lsmeans function on the same model (with the Holm correction), the p-value for the comparison between Negative and Polite context categories is now .05, and all of the other statistics have changed too. Again, see the screenshot below with the relevant line highlighted:
LSMeans output
I'm probably being dense because my understanding of LMMs isn't hugely advanced, but I've tried to Google the reason for this and can't seem to find it. I don't think it has anything to do with the corrections, because the smaller p-value is observed when the Holm correction is used. So why is this the case, and which value should I report, and why?
Thank you for your help!
Regression coefficients and marginal means are not one and the same. Once you learn these concepts it'll be easier to figure out which one is more informative and therefore which one you should report.
After we fit a regression by estimating its coefficients, we can predict the outcome y_i given the m input variables X_i = (X_i1, ..., X_im). If the inputs are informative about the outcome, the predicted y_i differs across different X_i. If we average the predictions y_i over the examples with X_ij = x_j, we get the marginal mean for the jth feature at the value x_j. It's crucial to keep track of which inputs are kept fixed (and at what values) and which inputs are averaged over (aka marginalized out).
In your case, contextCatPolite in the coefficients summary is the difference between Polite and Negative when smileType is set to its reference level (no reward, I'd guess). In the emmeans contrasts, Polite - Negative is the average difference over all smileTypes.
Interactions have a way of making interpretation more challenging and your model includes an interaction between smileType and contextCat. See Interaction analysis in emmeans.
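Here is a minimal sketch (simulated data; contextCat, smileType and enjoyment are borrowed from the screenshots, everything else is made up) that shows the two quantities side by side:
library(lme4)
library(emmeans)
set.seed(1)
d <- expand.grid(subject = factor(1:30),
                 contextCat = factor(c("Negative", "Polite")),
                 smileType = factor(c("none", "reward")))
subj_eff <- rnorm(30)
d$enjoyment <- subj_eff[d$subject] + rnorm(nrow(d))
m <- lmer(enjoyment ~ contextCat * smileType + (1 | subject), data = d)
## Coefficient: Polite - Negative at the reference smileType only ("none" here)
fixef(m)["contextCatPolite"]
## emmeans contrast: Negative - Polite averaged over both smileTypes
contrast(emmeans(m, ~ contextCat), method = "pairwise")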
To add to #dipetkov's answer, the coefficients in your LMM are based on treatment coding (sometimes called 'dummy' coding). With the interactions in the model, these coefficients are no longer "main-effects" in the traditional sense of factorial ANOVA. For instance, if you have:
y = b_0 + b_1(X_1) + b_2(X_2) + b_3 (X_1 * X_2)
...b_1 is "the effect of X_1" only when X_2 = 0:
y = b_0 + b_1(X_1) + b_2(0) + b_3 (X_1 * 0)
y = b_0 + b_1(X_1)
Thus, as #dipetkov points out, 1.625 is not the difference between Negative and Polite on average across all other factors (which you get from emmeans). Instead, this coefficient is the difference between Negative and Polite specifically when smileType = 0.
If you use contrast coding instead of treatment coding, then the coefficients from the regression output would match the estimated marginal means, because smileType = 0 would now represent the average across smile types. The coding scheme thus has a huge effect on the estimated values and statistical significance of regression coefficients, but it should not affect F-tests based on the reduction in deviance/variance (because no matter how you code it, a given variable explains the same amount of variance).
https://stats.oarc.ucla.edu/spss/faq/coding-systems-for-categorical-variables-in-regression-analysis/
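Continuing the sketch above (same simulated data), switching to sum-to-zero contrasts illustrates the point:
m2 <- lmer(enjoyment ~ contextCat * smileType + (1 | subject), data = d,
           contrasts = list(contextCat = contr.sum, smileType = contr.sum))
## Under contr.sum, contextCat1 equals (Negative - Polite)/2 averaged over
## smileType levels, i.e. half the emmeans difference (the sign and the
## factor of 2 come from the +1/-1 coding)
fixef(m2)["contextCat1"]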

Calculating marginal effects from predicted probabilities of zeroinfl() model object

This plot, which I previously created, shows predicted probabilities of claim onset based on two variables, PIB (scaled across the x-axis) and W, presented as its 75th and 25th percentiles. Confidence intervals for the predictions are presented alongside the two lines.
Probability of Claim Onset
As I theorize that W and PIB have an interactive effect on claim onset, I'd like to see whether the marginal effect of W is significant across values of PIB. Confidence intervals of the predicted probabilities alone cannot settle whether this effect is significant, per my reading here (https://www.sociologicalscience.com/download/vol-6/february/SocSci_v6_81to117.pdf).
I know that you can easily calculate the marginal effect from the predicted probabilities by subtracting one from the other. Yet I don't understand how to get confidence intervals for that marginal effect, which are obviously needed to determine when and where my two sets of probabilities are significantly different from one another.
The function that I used for calculating predicted probabilities of the zeroinfl() model object and the confidence intervals of those predicted probabilities is derived from an online posting (https://stat.ethz.ch/pipermail/r-help/2008-December/182806.html). I'm happy to provide more code if needed, but as this is not a question about an error, I am not sure it is needed.
So I'm not entirely sure this is the correct answer, but for anyone who comes across the same problem:
Assuming that the two prediction lines have the same variance, you can pool the standard errors before computing the interval; see the Wikipedia article on pooled variance to confirm.
## Recover the common SD from the SE, then pool (equal simulation sizes assumed)
sd_est   <- pred_1_OR_pred_2$SE * sqrt(simulation_n)
SEpooled <- sd_est * sqrt((1/simulation_n) + (1/simulation_n))
## 95% confidence band for the difference in predicted probabilities
low_conf  <- (pred_1$PP - pred_2$PP) - (1.96 * SEpooled)
high_conf <- (pred_1$PP - pred_2$PP) + (1.96 * SEpooled)
## Add the band to the plot
lines(pred_1$x_val, low_conf, lty = 2)
lines(pred_1$x_val, high_conf, lty = 2)

How to decide whether two variables are correlated

Running the below command in R:
cor.test(loandata$Age,loandata$Losses.in.Thousands)
loandata is the name of the dataset
Age is the independent variable
Losses.in.Thousands is the dependent variable
Below is the result in R:
Pearson's product-moment correlation
data: loandata$Age and loandata$Losses.in.Thousands
t = -61.09, df = 15288, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.4556139 -0.4301315
sample estimates:
cor
-0.4429622
How to decide whether Age is correlated with Losses.in.Thousands?
How do we decide by looking at the p-value with alpha = 0.05?
As stated in the other answer, the correlation coefficient produced by cor.test() in the OP is -0.4429. The Pearson correlation coefficient is a measure of the linear association between two variables. It varies between -1.0 (perfect negative linear association) and 1.0 (perfect positive linear association); the magnitude is the absolute value of the coefficient, i.e. its distance from 0 (no association).
The t-test indicates whether the correlation is significantly different from zero, given its magnitude relative to its standard error. In this case, the probability value for the t-test, p < 2.2e-16, indicates that we should reject the null hypothesis that the correlation is zero.
That said, the OP question:
How to decide whether Age is correlated with Losses.in.Thousands?
has two elements: statistical significance and substantive meaning.
From the perspective of statistical significance, the t-test indicates that the correlation is non-zero. Since the standard error of a correlation varies inversely with degrees of freedom, the very large number of degrees of freedom listed in the OP (15,288) means that a much smaller correlation would still result in a statistically significant t-test. This is why one must consider substantive significance in addition to statistical significance.
From a substantive significance perspective, interpretations vary. Hemphill 2003 cites Cohen's (1988) rule of thumb for correlation magnitudes in psychology studies:
0.10 - low
0.30 - medium
0.50 - high
Hemphill goes on to conduct a meta-analysis of correlation coefficients in psychology studies, which he summarizes in a table in his paper. As that table shows, Hemphill's empirical guidelines are much less stringent than Cohen's earlier recommendations.
Alternative: coefficient of determination
As an alternative, the coefficient of determination, r^2 can be used as a proportional reduction of error measure. In this case, r^2 = 0.1962, and we can interpret it as "If we know one's age, we can reduce our error in predicting losses in thousands by approximately 20%."
Reference: Burt Gerstman's Statistics Primer, San Jose State University.
Conclusion: Interpretation varies by domain
Given the problem domain, if the literature accepts a correlation magnitude of 0.45 as "large," then treat it as large, as is the case in many of the social sciences. In other domains, however, a much higher magnitude is required for a correlation to be considered "large."
Sometimes, even a "small" correlation is substantively meaningful as Hemphill 2003 notes in his conclusion.
For example, even though the correlation between aspirin taking and preventing a heart attack is only r=0.03 in magnitude, (see Rosenthal 1991, p. 136) -- small by most statistical standards -- this value may be socially important and nonetheless influence social policy.
To know whether the variables are correlated, the value to look at is cor = -0.4429.
In your case, the variables are negatively correlated; however, the magnitude of the correlation isn't very high.
A simpler, less confusing way to check whether two variables are correlated:
cor(loandata$Age,loandata$Losses.in.Thousands)
[1] -0.4429622
The null hypothesis of the Pearson test is that the two variables are not correlated: H0 = {rho = 0}
The p-value is the probability, under the null hypothesis, that the test statistic (or its absolute value, for a two-tailed test) would be at least as extreme as the one actually observed. You can reject the null hypothesis if the p-value is smaller than the significance level alpha. That is the case in your test, which means the variables are correlated.
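For completeness, a short sketch (assuming the same loandata data frame as in the question) that pulls these quantities out of cor.test() directly:
ct <- cor.test(loandata$Age, loandata$Losses.in.Thousands)
ct$estimate         ## the correlation, -0.4429622
ct$estimate^2       ## coefficient of determination, ~0.196
ct$p.value < 0.05   ## TRUE here: reject H0 of zero correlation at alpha = 0.05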

Significance level of ACF and PACF in R

I want to obtain the limits that determine the significance of autocorrelation coefficients and partial autocorrelation coefficients, but I don't know how to do it.
I obtained the partial autocorrelogram using the function pacf(data). I want R to print the values indicated in the figure.
The limits that determine the significance of the autocorrelation coefficients are: +/- (exp(2*1.96/√(N-3)) - 1)/(exp(2*1.96/√(N-3)) + 1).
Here N is the length of the time series, and I used the 95% confidence level.
The correlation values that correspond to the m % confidence intervals chosen for the test are given by 0 ± i/√N where:
N is the length of the time series
i is the number of standard deviations we expect m % of the correlations to lie within under the null hypothesis that there is zero autocorrelation.
Since the observed correlations are assumed to be normally distributed:
i = 1.96 for a 95% confidence level (acf's default),
i = 2.58 for a 99% confidence level,
and so on, as given by the quantiles of the Gaussian distribution.
Figure A1, Page 1011 here provides a nice example of how the above principle applies in practice.
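As a quick sketch (simulated white-noise series; ci is the argument acf() and pacf() actually expose for these bands):
x <- rnorm(500)                         ## hypothetical series
acf(x, ci = 0.99)                       ## draw the 99% significance bands
qnorm((1 + 0.99)/2) / sqrt(length(x))   ## the band half-width, 2.576/sqrt(N)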
After investigating the acf and pacf functions, and the psychometric library with its CIz and CIr functions, I found this simple code to do the task:
Compute the confidence interval on the Fisher z scale:
ciz = c(-1,1)*(-qnorm((1-alpha)/2)/sqrt(N-3))
where alpha is the confidence level (typically 0.95) and N is the number of observations.
Then back-transform to get the confidence interval for r:
cir = (exp(2*ciz)-1)/(exp(2*ciz)+1)
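A short usage sketch (simulated AR(1) series, purely illustrative) putting the two steps together and overlaying the limits on the partial autocorrelogram:
x <- arima.sim(list(ar = 0.5), n = 200)
N <- length(x); alpha <- 0.95
ciz <- c(-1, 1) * (-qnorm((1 - alpha)/2)/sqrt(N - 3))
cir <- (exp(2*ciz) - 1)/(exp(2*ciz) + 1)
cir                                     ## roughly +/- 0.139 for N = 200
pacf(x)
abline(h = cir, lty = 3, col = "red")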

How to set up a weighted least-squares regression in R for heteroscedastic data?

I'm running a regression on census data where my dependent variable is life expectancy and I have eight independent variables. The data is aggregated by city, so I have many thousands of observations.
My model is somewhat heteroscedastic, though. I want to run a weighted least-squares regression where each observation is weighted by the city's population. In this case, that would mean weighting the observations by the inverse of the square root of the population. It's unclear to me, however, what the best syntax would be. Currently, I have:
Model=lm(…,weights=(1/population))
Is that correct? Or should it be:
Model=lm(…,weights=(1/sqrt(population)))
(I found this question here: Weighted Least Squares - R but it does not clarify how R interprets the weights argument.)
From ?lm: "weights: an optional vector of weights to be used in the fitting process. Should be NULL or a numeric vector. If non-NULL, weighted least squares is used with weights weights (that is, minimizing sum(w*e^2)); otherwise ordinary least squares is used." R doesn't do any further interpretation of the weights argument.
So, if what you want to minimize is the sum of (the squared distance from each point to the fit line) * (1/sqrt(population)), then you want ...weights=(1/sqrt(population)). If you want to minimize the sum of (the squared distance from each point to the fit line) * (1/population), then you want ...weights=1/population.
As to which of those is most appropriate... that's a question for CrossValidated!
To answer your question, Lucas, I think you want weights=(1/population). R parameterizes the weights as inversely proportional to the variances, so specifying the weights this way amounts to assuming that the variance of the error term is proportional to the population of the city, which is a common assumption in this setting.
But check the assumption! If the variance of the error term is indeed proportional to the population size, then if you divide each residual by the square root of its corresponding sample size, the residuals should have constant variance. Remember, dividing a random variable by a constant results in the variance being divided by the square of that constant.
Here's how you can check this: Obtain residuals from the regression by
residuals = lm(..., weights = 1/population)$residuals
Then divide each residual by the square root of its corresponding population size:
standardized_residuals = residuals/sqrt(population)
Then compare the sample variance among the residuals corresponding to the bottom half of population sizes:
variance1 = var(standardized_residuals[population < median(population)])
to the sample variance among the residuals corresponding to the upper half of population sizes:
variance2 = var(standardized_residuals[population > median(population)])
If these two numbers, variance1 and variance2, are similar, then you're doing something right. If they are drastically different, then your assumption may be violated.
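Here is a self-contained sketch (fully simulated data; all names are hypothetical) of the check described above, under the assumption that the error variance is proportional to population:
set.seed(42)
n <- 5000
population <- round(runif(n, 1e3, 1e6))
x <- rnorm(n)
## error SD proportional to sqrt(population), i.e. variance proportional to population
y <- 70 + 0.5 * x + rnorm(n, sd = 0.01 * sqrt(population))
fit <- lm(y ~ x, weights = 1/population)
standardized_residuals <- residuals(fit)/sqrt(population)
var(standardized_residuals[population < median(population)])  ## these two numbers
var(standardized_residuals[population > median(population)])  ## should be similar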
