Get data distribution given a dataframe in R

I'm new to R and I would like to "see" which distribution my data follow.
My data represent the number of passengers boarding at 10 bus stops, recorded daily for one month at the same time each day.
Date   Stop.1   Stop.2   Stop.3   Stop.4   Stop.5   ...
2-9         3       26       11        3        0
3-9         2       44       23        0       12
4-9        26       16        0        0        4
...
My goal is to understand the distribution of the "entering" passengers at each stop so that I can create a simulation of the arrivals at each stop.

Density Plot
You can plot a kernel density estimate to let you "see" the distribution. Assuming df is a data frame containing the information:
d <- density(df$Stop.1) # returns the density data for Stop 1.
plot(d) # plots the results
Summary statistics
Or you can use the basicStats function from the fBasics package to give some useful summary statistics.
library(fBasics)
basicStats(df)
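If the eventual goal is to simulate arrivals, one option is to fit a candidate count distribution and draw from it. Below is a minimal sketch, assuming df$Stop.1 holds non-negative integer counts (one value per day) and that a Poisson is a reasonable first guess; it uses fitdistr from the MASS package:
library(MASS)
fit <- fitdistr(df$Stop.1, densfun = "Poisson")    # maximum-likelihood estimate of lambda
lambda_hat <- fit$estimate["lambda"]
simulated <- rpois(30, lambda = lambda_hat)        # simulate 30 days of arrivals at Stop 1
# visually compare the fitted distribution to the observed counts
hist(df$Stop.1, probability = TRUE, main = "Stop 1: observed vs fitted Poisson")
points(0:max(df$Stop.1), dpois(0:max(df$Stop.1), lambda_hat), col = "red", pch = 19)
If the counts are overdispersed (variance well above the mean, which basicStats will show), a negative binomial fit via fitdistr(df$Stop.1, "negative binomial") may be a better sketch.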

Related

Random sampling with normal distribution from existing data in R

I have a large dataset of individuals who have rated some items (x1:x10). For each individual, the ratings have been combined into a total score (ranging from 0 to 5). Now, I would like to draw two subsamples with the same sample size, in which the total score has a specific mean (1.5 and 3) and follows a normal distribution. Individuals might be part of both subsamples.
My guess is that sampling from a vector (the total score) with the outlined specifications would work. Unfortunately, I have only found different ways to draw random samples from a vector, but not a way to sample around a specific mean.
EDIT:
As pointed out, a normal distribution would not be possible. Instead, I am looking for a way to sample from a binomial distribution (directly from the data, without the workaround of creating a similar distribution and matching).
You can't have normally distributed data on a discrete scale with hard limits. A sample drawn from a normal distribution with a mean between 0 and 5 would be symmetrical around the mean, would take continuous rather than discrete values, and would have a non-zero probability of containing values of less than zero and more than 5.
You want your sample to contain discrete values between zero and five and to have a central tendency around the mean. To emulate scores with a particular mean you need to sample from the binomial distribution using rbinom.
get_n_samples_averaging_m <- function(n, m) {
  # n draws from Binomial(size = 5, prob = m/5): each value lies in 0..5
  # and each draw has expected value m
  rbinom(n, size = 5, prob = m / 5)
}
Now you can do
samp <- get_n_samples_averaging_m(40, 1.5)
print(samp)
# [1] 1 3 2 1 3 2 2 1 1 1 1 2 0 3 0 0 2 2 2 3 1 1 1 1 1 2 1 2 0 1 4 2 0 2 1 3 2 0 2 1
mean(samp)
# [1] 1.5
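One thing to keep in mind: each call only approximates the target mean. A quick, purely illustrative check of how the sample means behave (assuming the function above):
set.seed(1)                                      # for reproducibility
sample_means <- replicate(1000, mean(get_n_samples_averaging_m(40, 1.5)))
summary(sample_means)                            # centred near 1.5
range(get_n_samples_averaging_m(40, 3))          # draws stay within 0..5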

How to construct ROC curve in R with a small clinical dataset

I need help with an ROC curve in R. My data are not difficult, but I don't know how to get the ROC curve and AUC. I typed the dataset here if you need to have a look. The cutoff comes from the CMMS score (e.g. a score below 5/10/20 means the patient has dementia).
Table - Relationship of clinical dementia to outcome on the Mini-Mental Status Test

CMMS score   Nondemented   Demented
0–5                    0          2
6–10                   0          1
11–15                  3          4
16–20                  9          5
21–25                 16          3
26–30                 18          1
Total                 46         16
Please let me know any ideas. Thank you.
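One possible approach, sketched here under the assumption that the pROC package is acceptable: expand the grouped table into one row per patient, use each score band's midpoint as the test value, and let pROC compute the empirical ROC curve and AUC.
library(pROC)
midpoint <- c(2.5, 8, 13, 18, 23, 28)            # midpoints of 0-5, 6-10, ..., 26-30
nondem   <- c(0, 0, 3, 9, 16, 18)                # counts from the table
demented <- c(2, 1, 4, 5, 3, 1)
score  <- c(rep(midpoint, times = nondem), rep(midpoint, times = demented))
status <- c(rep(0, sum(nondem)), rep(1, sum(demented)))   # 1 = demented
roc_obj <- roc(status, score, direction = ">")   # lower CMMS score -> more likely demented
plot(roc_obj)
auc(roc_obj)
With only six score bands the curve will be coarse, but it shows the sensitivity/specificity trade-off across the possible cutoffs.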

Two-phase sample weights

I am using the survey package to analyze data from a two-phase survey. The first phase included ~5000 cases, and the second ~2700. Weights were calculated beforehand to adjust for several variables (ethnicity, sex, etc.) and, as far as I understand, for the decrease in sample size when moving to phase 2.
I'm interested in proportions of binary variables, e.g. suicides in sick vs. healthy individuals.
An example of a simple output I receive in the overall sample:
table(df$schiz, df$suicide)
        0    1
  0  4857    8
  1    24    0
An example of a simple output I receive in the second phase sample only:
table(df2$schiz, df2$suicide)
        0    1
  0  2685    5
  1    24    0
And within the phase two sample with the weights included:
dfw <- svydesign(ids = ~1, data = df2, weights = df2$weights)
svytable(~schiz + suicide, design = dfw)
       suicide
schiz         0      1
    0   2701.51   2.67
    1     18.93   0.00
My question is: shouldn't the weights correct for the decrease in N when moving from phase 1 to phase 2? That is, shouldn't the total N of the weighted table be ~5000 cases, and not ~2700 as it is now?
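One quick check, assuming the weights really are stored in df2$weights as above: weights scaled to recover the phase-one total would sum to roughly 5000, whereas weights normalised to a mean of 1 keep the weighted table summing to about 2700, which is what the svytable output above shows.
sum(df2$weights)                 # ~5000 would indicate scaling back to the phase-1 total
nrow(df2)                        # ~2700 phase-2 cases
sum(df2$weights) / nrow(df2)     # mean weight; close to 1 implies normalisation to the phase-2 N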

No zeros predicted from zeroinfl object in R?

I created a zero-inflated negative binomial model and want to investigate how many of the zeros were partitioned out to sampling or structural zeros. How do I implement this in R? The example code on the zeroinfl page is not clear to me.
data("bioChemists", package = "pscl")
fm_zinb2 <- zeroinfl(art ~ . | ., data = bioChemists, dist = "negbin")
table(round(predict(fm_zinb2, type="zero")))
> 0 1
> 891 24
table(round(bioChemists$art))
> 0 1 2 3 4 5 6 7 8 9 10 11 12 16 19
> 275 246 178 84 67 27 17 12 1 2 1 1 2 1 1
What is this telling me?
When I do the same for my data, I get a readout that just has the sample size listed under the 1. Thanks
The details are in the paper by Zeileis (2008) available at https://www.jstatsoft.org/article/view/v027i08/v27i08.pdf
It's a little bit of work (a couple of years on, and your question was still unanswered) to gather together all the explanations of what the predict function does for each model in the pscl library, and it's buried (pp. 19 and 23) in the mathematical expression of the likelihood function (equations 7 and 8). I have interpreted your question to mean that you want to know how to use the different types of predict:
What is the expected count? (type="response")
What is the (conditional) expected probability of an excess zero? (type="zero")
What is the (marginal) expected probability of any count? (type="prob")
And finally how many predicted zeros are excess (eg sampling) rather than regression based (ie structural)?
To read in the data that comes with the pscl package:
data("bioChemists", package = "pscl")
Then fit a zero-inflated negative binomial model:
fm_zinb2 <- zeroinfl(art ~ . | ., data = bioChemists, dist = "negbin")
If you wish to predict the expected values, then you use
predict(fm_zinb2, type="response")[29:31]
29 30 31
0.5213736 1.7774268 0.5136430
So under this model, the expected number of articles published in the last 3 years of a PhD is one half for biochemists 29 and 31 and nearly 2 for biochemist 30.
But I believe that you are after the probability of an excess zero (in the point mass at zero). This command does that and prints off the values for items in row 29 to 31 (yes I went fishing!):
predict(fm_zinb2, type="zero")[29:31]
It produces this output:
29 30 31
0.58120120 0.01182628 0.58761308
So the probability that the 29th item is an excess zero (which you refer to as a sampling zero, i.e. a non-structural zero and hence not explained by the covariates) is 58%, for the 30th it is 1.2%, and for the 31st it is 59%. So that's two biochemists who are predicted to have zero publications, and this is in excess of those that can be explained by the negative binomial regression on the various covariates.
And you have tabulated these predicted probabilities across the whole dataset
table(round(predict(fm_zinb2, type="zero")))
0 1
891 24
So your output tells you that only 24 biochemists were likely to be an excess zero, ie with a predicted probability of an excess zero that was over one-half (due to rounding).
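A hedged alternative to rounding, using the terminology above: summing the predicted probabilities gives the model's expected number of each kind of zero, which may be closer to what you mean by "how many zeros were partitioned out":
pzero_all <- predict(fm_zinb2, type = "zero")         # P(excess zero) per biochemist
pany_zero <- predict(fm_zinb2, type = "prob")[, 1]    # P(a zero of any kind) per biochemist
sum(pzero_all)                # expected number of excess ("sampling") zeros
sum(pany_zero - pzero_all)    # expected number of regression-based ("structural") zeros
sum(bioChemists$art == 0)     # observed zeros (275), for comparison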
It would perhaps be easier to interpret if you tabulated into bins of 10 points on the percentage scale
table(cut(predict(fm_zinb2, type="zero"), breaks=seq(from=0,to=1,by=0.1)))
to give
  (0,0.1] (0.1,0.2] (0.2,0.3] (0.3,0.4] (0.4,0.5] (0.5,0.6]
      751        73        34        23        10        22
(0.6,0.7] (0.7,0.8] (0.8,0.9]   (0.9,1]
        2         0         0         0
So you can see that 751 biochemists were unlikely to be an excess zero, but 22 biochemists have a chance of between 50-60% of being an excess zero, and only 2 have a higher chance (60-70%). No one was extremely likely to be an excess zero.
Graphically, this can be shown in a histogram
hist(predict(fm_zinb2, type="zero"), col="slateblue", breaks=seq(0,0.7,by=.02))
You tabulated the actual number of publications per biochemist (no rounding necessary, as these are counts):
table(bioChemists$art)
0 1 2 3 4 5 6 7 8 9 10 11 12 16 19
275 246 178 84 67 27 17 12 1 2 1 1 2 1 1
Who is the special biochemist with 19 publications?
most_pubs <- max(bioChemists$art)
most_pubs                     # 19
extreme_biochemist <- bioChemists$art == most_pubs
which(extreme_biochemist)     # row 915
You can obtain the estimated probability that each biochemist has any number of pubs, exactly 0 and up to the maximum, here an unbelievable 19!
preds <- predict(fm_zinb2, type="prob")
preds[extreme_biochemist,]
and you can look at this for our one special biochemist, who had 19 publications (using base R plotting here, but ggplot is more beautiful)
expected <- predict(fm_zinb2, type="response")[extreme_biochemist]
# barplot returns the midpoints for counts 0 up to 19
midpoints <- barplot(preds[extreme_biochemist, ],
                     xlab = "Predicted #pubs", ylab = "Relative chance among biochemists")
# add 1 because the first count is 0
abline(v = midpoints[19 + 1], col = "red", lwd = 3)
abline(v = midpoints[round(expected) + 1], col = "yellow", lwd = 3)
and this shows that although we expect 4.73 publications for biochemist 915, under this model more likelihood is given to 2-3 pubs, nowhere near the actual 19 pubs (red line).
Getting back to the question, for biochemist 29,
the probability of an excess zero is
pzero <- predict(fm_zinb2, type="zero")
pzero[29]
29
0.5812012
The probability of a zero, overall (marginally) is
preds[29,1]
[1] 0.7320871
So the proportion of predicted probability of a zero that is excess versus structural (ie explained by the regression) is:
pzero[29]/preds[29,1]
29
0.7938962
Or the additional probability of a zero, beyond the chance of an excess zero is:
preds[29,1] - pzero[29]
29
0.1508859
The actual number of publications for biochemist 29 is
bioChemists$art[29]
[1] 0
So the predicted chance that this biochemist has zero publications is only partly explained by the regression (about 20%) and is mostly excess (about 80%).
And overall, we see that for most biochemists, this is not the case. Our biochemist 29 is unusual, since their chance of zero pubs is mostly excess, ie inexplicable by the regression. We can see this via:
hist(pzero/preds[,1], col="blue", xlab="Proportion of predicted probability of zero that is excess")
which gives a histogram of that proportion across all 915 biochemists.

Forecasting multivariate data with Auto.arima

I am trying to forecast sales from weekly data. The data consist of these variables: week number, sales, average price per unit, holiday (whether that week contains a holiday or not) and promotion (whether any promotion is running), for 104 weeks. The last 6 observations of the data set look like this:
Week   Sales   Avg.price.unit   Holiday   Promotion
101    8,970               50         0           1
102   17,000               50         1           1
103   23,000               80         1           0
104   28,000              180         1           0
105                       176         1           0
106                        75         0           1
Now I want to forecast sales for the 105th and 106th weeks. So I created a univariate time series x using the ts function and then ran auto.arima:
x <- ts(sales$Sales, frequency = 7)
fit <- auto.arima(x, xreg = external, test = c("kpss", "adf", "pp"),
                  seasonal.test = c("ocsb", "ch"), allowdrift = TRUE)
fit
ARIMA(1,1,1)
Coefficients:
          ar1      ma1  Avg.price.unit   Holiday  Promotion
      -0.1497  -0.9180          0.0363  -10.4181    -4.8971
s.e.   0.1012   0.0338          0.0646    5.1999     5.5148
sigma^2 estimated as 479.3:  log likelihood = -465.09
AIC = 942.17   AICc = 943.05   BIC = 957.98
Now when I want to forecast the values for the last 2 weeks (105th and 106th), I supply the future values of the regressors for those weeks:
forecast(fit, xreg=ext)
where ext consists of future values of regressors for last 2 weeks.
The output comes as:
         Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
15.85714       44.13430 16.07853 72.19008 1.226693 87.04191
16.00000       45.50166 17.38155 73.62177 2.495667 88.50765
The output looks incorrect, since the forecast sales values are far too low: the sales values in the training data are generally in the thousands.
If anyone can tell me why it is coming incorrect/unexpected, that would be great.
If you knew a priori that certain weeks of the year or certain events in the year were possibly important, you could form a transfer function that could be useful. You might have to include some ARIMA structure to deal with short-term autoregressive structure and/or some pulse/level-shift/local-trend components to deal with unspecified deterministic series (omitted variables). If you would like to post all of your data I would be glad to demonstrate that for you, thus providing ground-zero help. Alternatively you can email it to me at dave#autobox.com and I will analyze it and post the data and the results to the list. Other commentators on this question might also want to do the same for comparative analytics.
Where are the 51 weekly dummies in your model? Without them you have no way to capture seasonality.
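A sketch of one way to add that seasonal structure, assuming the data really are weekly with annual seasonality and reusing the external and ext regressor matrices from the question; Fourier terms from the forecast package stand in for 51 literal dummies, since two years of data cannot support that many parameters:
library(forecast)
x <- ts(sales$Sales, frequency = 52)        # 52 weeks per year rather than frequency = 7
seas <- fourier(x, K = 2)                   # a few smooth seasonal terms instead of 51 dummies
fit <- auto.arima(x, xreg = cbind(external, seas), seasonal = FALSE)
fc <- forecast(fit, xreg = cbind(ext, fourier(x, K = 2, h = 2)))
fc
Whether K = 2 is enough harmonics is a judgment call; with only 104 observations, keeping the regressor count small is the safer default.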
