Reformatting cumulative data - r

I have data with cumulative households plotted against the cumulative wealth they possess. I've attached an image of a small amount of the data. Using R's diff() function lets me get what % of households hold what % of wealth, which is useful.
I aim to find the Gini index of my data, which first requires getting it into a format where the households are evenly spaced. There are roughly 20,000 rows, so I need to standardise the wealth owned to steps of about 0.005% in order to obtain a true distribution of wealth over households (1, 2, etc.) rather than over the percentage of households.
EDIT:
structure(list(ï..0.002 = c(0.005, 0.007, 0.017, 0.025, 0.027,
0.037, 0.047, 0.057, 0.067, 0.075, 0.081, 0.09, 0.1, 0.107, 0.116,
0.124, 0.13, 0.138, 0.145, 0.151), X.0.002 = c(-0.004, -0.005,
-0.008, -0.01, -0.01, -0.013, -0.015, -0.017, -0.019, -0.02,
-0.021, -0.022, -0.024, -0.025, -0.026, -0.027, -0.027, -0.028,
-0.029, -0.03)), row.names = c(NA, 20L), class = "data.frame")
Data OCR'd with https://ocr.space/ :
Obs wealth households
1 -0.002 0.002
2 -0.004 0.005
3 -0.005 0.007
4 -0.008 0.017
5 -0.01 0.025
6 -0.01 0.027
7 -0.013 0.037
8 -0.015 0.047
9 -0.017 0.057
10 -0.019 0.067
11 -0.02 0.075
12 -0.021 0.081
13 -0.022 0.09
14 -0.024 0.1

I suggest using interpolation to get your data into an evenly spaced form, via the approx function:
interpolation <- approx(x = df$cum_hh, y = df$cum_wealth, xout = seq(0, 1, by = 0.00005))
interpolation$x ## evenly spaced cumulative households
interpolation$y ## interpolated cumulative wealth
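Once the Lorenz curve is on an even grid, the Gini index follows from the area under it. A minimal sketch (the helper name is mine, not from any package); note that negative wealth values, as in your data, can push the index outside the usual 0–1 range:

```r
# Gini = 1 - 2 * (area under the Lorenz curve),
# with the area approximated by the trapezoidal rule
gini_from_lorenz <- function(p, L) {
  area <- sum(diff(p) * (head(L, -1) + tail(L, -1)) / 2)
  1 - 2 * area
}

# sanity check on a perfectly equal distribution: Gini should be 0
p <- seq(0, 1, by = 0.01)
gini_from_lorenz(p, p)
```

For your data this would be `gini_from_lorenz(interpolation$x, interpolation$y)`, after dropping any NAs that approx returns for grid points outside the observed range.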


function to iterate through a summation

I have two datasets.
One looks like this:
model <- data.frame(Variable = c("P_ALC", "P_ALC_BEER", "P_ALC_LIQUOR",
"P_ALC_WINE", "P_BEAN", "P_LUNCH"), Estimate = c(0.0728768079454515,
0.189831156431574, 0.182511704261063, 0.176711987960571, 0.0108000123096659,
-0.00463222009211804))
The other looks like this:
data <- data.frame(P_ALC = c(0.044, 0.001, 2.295, 0.55, 0.063, 1.604,
0.584, 0.211, 0, 0.244), P_ALC_BEER = c(0.02, 0, 0.177, 0.53,
0.02, 0.53, 0, 0.01, 0, 0.02), P_ALC_LIQUOR = c(0.022, 0, 0,
0, 0.022, 1.069, 0.583, 0, 0, 0.046), P_ALC_WINE = c(0, 0, 2.118,
0.02, 0.02, 0.004, 0, 0.202, 0, 0.177), P_BEAN = c(0.18, 0.133,
0.182, 0.128, 0.06, 0.408, 0.066, 0.18, 0.757, 0.068), P_LUNCH = c(0.137,
0.058, 0.107, 0.249, 0.037, 0.161, 0.542, 0.033, 0.029, 0.44))
I want to do the following calculation:
score <- model[1,2] + model[2,2]*data[model[2,1]] + model[3,2]*data[model[3,1]]+ model[4,2]*data[model[4,1]] + model[5,2]*data[model[5,1]] + model[6,2]*data[model[6,1]]
This works fine for my toy dataset. But in reality the model data frame is around 80 rows long, so I want to write a function that keeps counting through the rows without having to copy and paste.
We may loop across the columns specified in the 'Variable' column of 'model' (except the first element), multiply each by its corresponding 'Estimate' value, take the rowSums, and add the first element of 'Estimate':
library(dplyr)
data %>%
mutate(score = model$Estimate[1] + rowSums(across(all_of(model$Variable[-1]),
~ . * model$Estimate[match(cur_column(), model$Variable)])))
Output:
P_ALC P_ALC_BEER P_ALC_LIQUOR P_ALC_WINE P_BEAN P_LUNCH score
1 0.044 0.020 0.022 0.000 0.180 0.137 0.08199808
2 0.001 0.000 0.000 0.000 0.133 0.058 0.07404454
3 2.295 0.177 0.000 2.118 0.182 0.107 0.48222287
4 0.550 0.530 0.000 0.020 0.128 0.249 0.17725054
5 0.063 0.020 0.022 0.020 0.060 0.037 0.08469954
6 1.604 0.530 1.069 0.004 0.408 0.161 0.37295980
7 0.584 0.000 0.583 0.000 0.066 0.542 0.17748327
8 0.211 0.010 0.000 0.202 0.180 0.033 0.11226208
9 0.000 0.000 0.000 0.000 0.757 0.029 0.08091808
10 0.244 0.020 0.046 0.177 0.068 0.440 0.11504322
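For what it's worth, the same score can be computed in base R as an intercept plus a matrix product, which avoids the per-column loop entirely. A sketch assuming the `model` and `data` frames above (and that every name in `model$Variable[-1]` is a column of `data`):

```r
# first Estimate is the intercept; the rest multiply their matching columns
score <- drop(model$Estimate[1] +
  as.matrix(data[, model$Variable[-1]]) %*% model$Estimate[-1])
```

This reproduces the `score` column shown in the output above.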

What are the aes() values when making a boxplot using the ggplot package?

I'm trying to make a boxplot with the ggplot2 package in RStudio. I've been reading past ggplot2 questions, but this is so basic that I can't find it covered in detail... I'm bad at using R.
This is the very basic code I'm trying to use, but I don't know what my x and y values should be:
ggplot(data, aes(x,y)) + geom_boxplot()
So, my y values are Pearson coefficients, each between 0 and 1, but I'm struggling to put that in as a range. And I'm confused because my x values are just 4 different conditions. Should I use a vector, e.g. c(drug 6hr, control, drug 24hr, control)?
I successfully made a basic boxplot using boxplot(), but I'm using ggplot2 because I want to show every individual value on the plot using jitter, which I have also failed to get working.
Sorry, I have only been using R for about 6 months! Trying to learn as much as I can.
My data:
drug 6hr, control, drug 24hr, control
0.876 0.707 0.709 0.521
0.084 0.275 0.468 0.795
0.911 0.985 0.565 0.150
0.503 0.584 0.693 0.766
0.363 0.102 0.775 0.640
0.219 0.888 0.724 0.516
0.041 0.277 0.877 0.216
0.206 0.974 0.771 0.434
0.787 0.725 0.671 0.916
0.896 0.873 0.443 0.693
0.396 0.641 0.525 0.471
0.250 0.184 0.467 0.537
0.094 0.453 0.641 0.910
0.750 0.748 0.634 0.007
0.026 0.263 0.069 0.725
0.109 0.227 0.535
0.780 0.811 0.241
0.710 0.568 0.029
0.676 0.114 0.237
0.610 0.260 0.241
0.170 0.728 0.405
0.025 0.815 0.914
0.022 0.329 0.766
0.039 0.714
0.034 0.096
0.402 0.988
0.649
0.564
0.190
0.844
0.920
0.744
0.871
0.565
You need to reshape your dataframe into a longer format; that will make it much easier to get your boxplot with ggplot2.
Here, I'm using the pivot_longer function from the tidyr package to transform your data into two columns: the first holds the name of the condition and the second contains the values:
library(tidyr)
library(dplyr)
DF %>% pivot_longer(everything(), names_to = "var",values_to = "values")
# A tibble: 136 x 2
var values
<chr> <dbl>
1 drug_6hr 0.876
2 Control_6 0.707
3 drug_24hr 0.709
4 Control_24 0.521
5 drug_6hr 0.084
6 Control_6 0.275
7 drug_24hr 0.468
8 Control_24 0.795
9 drug_6hr 0.911
10 Control_6 0.985
# … with 126 more rows
Then you can add the graphic part to the pipe (%>%) sequence, passing the dataframe to ggplot with the appropriate aes arguments and using the geom_boxplot and geom_jitter functions:
library(tidyr)
library(dplyr)
library(ggplot2)
DF %>% pivot_longer(everything(), names_to = "var",values_to = "values") %>%
ggplot(aes(x = var, y = values, fill = var, color = var))+
geom_boxplot(alpha = 0.2)+
geom_jitter()
Alternatively, to remove the warning messages caused by the NA values, you can filter them out by adding a filter step between pivot_longer and ggplot:
DF %>% pivot_longer(everything(), names_to = "var",values_to = "values") %>%
filter(!is.na(values)) %>%
ggplot(aes(x = var, y = values, fill = var, color = var))+
geom_boxplot(alpha = 0.2)+
geom_jitter()
Does that answer your question?
Reproducible example
I edited your example to make it easier to read into R. I also modified the column names, as pointed out by @akrun:
structure(list(drug_6hr = c(0.876, 0.084, 0.911, 0.503, 0.363,
0.219, 0.041, 0.206, 0.787, 0.896, 0.396, 0.25, 0.094, 0.75,
0.026, 0.109, 0.78, 0.71, 0.676, 0.61, 0.17, 0.025, 0.022, 0.039,
0.034, 0.402, 0.649, 0.564, 0.19, 0.844, 0.92, 0.744, 0.871,
0.565), Control_6 = c(0.707, 0.275, 0.985, 0.584, 0.102, 0.888,
0.277, 0.974, 0.725, 0.873, 0.641, 0.184, 0.453, 0.748, 0.263,
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA, NA), drug_24hr = c(0.709, 0.468, 0.565, 0.693, 0.775,
0.724, 0.877, 0.771, 0.671, 0.443, 0.525, 0.467, 0.641, 0.634,
0.069, 0.227, 0.811, 0.568, 0.114, 0.26, 0.728, 0.815, 0.329,
0.714, 0.096, 0.988, NA, NA, NA, NA, NA, NA, NA, NA), Control_24 = c(0.521,
0.795, 0.15, 0.766, 0.64, 0.516, 0.216, 0.434, 0.916, 0.693,
0.471, 0.537, 0.91, 0.007, 0.725, 0.535, 0.241, 0.029, 0.237,
0.241, 0.405, 0.914, 0.766, NA, NA, NA, NA, NA, NA, NA, NA, NA,
NA, NA)), row.names = c(NA, -34L), class = c("data.table", "data.frame"
))

confidence intervals for a tibble in wide format [duplicate]

This question already has answers here:
Aggregate / summarize multiple variables per group (e.g. sum, mean)
(10 answers)
Group by multiple columns and sum other multiple columns
(7 answers)
Closed 3 years ago.
I have a large tibble, an example of which is shown below. It has seven predictors (V4 to V10) and nine outcomes (w1, w2, w3, mw, i1, i2, i3, mi, p2).
What I am trying to do is create confidence intervals for the outcomes in columns 2 (w1) to 10 (p2).
vars w1 w2 w3 mw i1 i2 i3 mi p2
V4 0.084 0.017 0.061 0.054 22.800 4.570 16.700 14.700 0.367
V5 0.032 0.085 0.039 0.052 8.840 23.100 10.700 14.200 0.367
V6 0.026 0.066 0.022 0.038 7.030 18.000 6.070 10.400 0.367
V7 0.097 0.020 0.066 0.061 26.300 5.420 18.100 16.600 0.367
V8 0.048 0.071 0.043 0.054 13.100 19.300 11.800 14.700 0.367
V9 0.018 0.111 0.020 0.050 4.800 30.300 5.440 13.500 0.367
V10 0.053 0.020 0.103 0.058 14.300 5.330 28.000 15.900 0.367
V4 0.084 0.017 0.060 0.054 22.400 4.420 16.200 14.300 0.373
V5 0.032 0.072 0.036 0.047 8.630 19.300 9.760 12.500 0.373
V6 0.030 0.076 0.023 0.043 8.080 20.500 6.070 11.500 0.373
V7 0.080 0.021 0.087 0.063 21.500 5.720 23.300 16.800 0.373
V8 0.053 0.090 0.034 0.059 14.100 24.000 9.110 15.700 0.373
V9 0.016 0.101 0.025 0.048 4.410 27.100 6.790 12.800 0.373
V10 0.060 0.022 0.100 0.061 16.000 5.950 26.800 16.300 0.373
When I group_by variables (vars) in dplyr and run quantiles on three of the outcomes (as a test), it does not give me what I'm looking for. Instead of the confidence intervals for the three outcomes, it gives me just one confidence interval, as seen below:
group_by(vars) %>%
  do(data.frame(t(quantile(c(.$w1, .$w2, .$w3), probs = c(0.025, 0.975)))))
# A tibble: 7 x 3
# Groups: variables [7]
variables X2.5 X97.5
1 V10 0.0202 0.103
2 V4 0.017 0.084
3 V5 0.032 0.0834
4 V6 0.0221 0.0748
5 V7 0.0201 0.0958
6 V8 0.0351 0.0876
7 V9 0.0162 0.110
In short, what I'm looking for is something like the table below, where I get the confidence intervals for each outcome.
w1 w2 w3
vars X2.5 X97.5 vars X2.5 X97.5 vars X2.5 X97.5
V10 0.020 0.103 V10 0.020 0.103 V10 0.020 0.103
V4 0.017 0.084 V4 0.017 0.084 V4 0.017 0.084
V5 0.032 0.083 V5 0.032 0.083 V5 0.032 0.083
V6 0.022 0.075 V6 0.022 0.075 V6 0.022 0.075
V7 0.020 0.096 V7 0.020 0.096 V7 0.020 0.096
V8 0.035 0.088 V8 0.035 0.088 V8 0.035 0.088
V9 0.016 0.110 V9 0.016 0.110 V9 0.016 0.110
Any pointers in the right direction would be greatly appreciated. I've read on StackOverflow, but can't seem to find an answer that addresses what I want to do.
Here are two ways.
Base R.
aggregate(df1[-1], list(df1[[1]]), quantile, probs = c(0.025, 0.975))
With the tidyverse.
library(dplyr)
df1 %>%
group_by(vars) %>%
mutate_at(vars(w1:p2), quantile, probs = c(0.025, 0.975))
Note that in the second way the output format is different: within each group, the first row holds the first quantile (0.025) and the second row the second (0.975).
Data.
df1 <-
structure(list(vars = structure(c(2L, 3L, 4L,
5L, 6L, 7L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 1L),
.Label = c("V10", "V4", "V5", "V6", "V7", "V8",
"V9"), class = "factor"), w1 = c(0.084, 0.032,
0.026, 0.097, 0.048, 0.018, 0.053, 0.084,
0.032, 0.03, 0.08, 0.053, 0.016, 0.06),
w2 = c(0.017, 0.085, 0.066, 0.02, 0.071, 0.111,
0.02, 0.017, 0.072, 0.076, 0.021, 0.09, 0.101,
0.022), w3 = c(0.061, 0.039, 0.022, 0.066,
0.043, 0.02, 0.103, 0.06, 0.036, 0.023, 0.087,
0.034, 0.025, 0.1), mw = c(0.054, 0.052, 0.038,
0.061, 0.054, 0.05, 0.058, 0.054, 0.047, 0.043,
0.063, 0.059, 0.048, 0.061), i1 = c(22.8, 8.84,
7.03, 26.3, 13.1, 4.8, 14.3, 22.4, 8.63, 8.08,
21.5, 14.1, 4.41, 16), i2 = c(4.57, 23.1, 18, 5.42,
19.3, 30.3, 5.33, 4.42, 19.3, 20.5, 5.72, 24, 27.1,
5.95), i3 = c(16.7, 10.7, 6.07, 18.1, 11.8, 5.44,
28, 16.2, 9.76, 6.07, 23.3, 9.11, 6.79, 26.8),
mi = c(14.7, 14.2, 10.4, 16.6, 14.7, 13.5, 15.9,
14.3, 12.5, 11.5, 16.8, 15.7, 12.8, 16.3),
p2 = c(0.367, 0.367, 0.367, 0.367, 0.367, 0.367,
0.367, 0.373, 0.373, 0.373, 0.373, 0.373, 0.373,
0.373)), class = "data.frame",
row.names = c(NA, -14L))
Another possibility: melt/pivot to long format, compute the summaries, then cast/pivot back to wide format:
library(tidyverse)
df2 <- (df1
%>% pivot_longer(-vars, names_to = "outcome", values_to = "value")
%>% group_by(vars,outcome)
%>% summarise(lwr=quantile(value,0.025),upr=quantile(value,0.975))
)
df2 %>% pivot_wider(names_from=outcome,values_from=c(lwr,upr))
Unfortunately the columns aren't in the order you want; I can't think of a quick fix (you can select() the variables in the order you want...).
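For the record, one way to get each outcome's bounds sitting next to each other is to reorder the columns by the outcome part of their names after pivoting. A sketch, assuming `df2` from the pipeline above (recent tidyr, 1.2.0 or later, can also do this directly with `pivot_wider(..., names_vary = "slowest")`):

```r
wide <- df2 %>% pivot_wider(names_from = outcome, values_from = c(lwr, upr))
# group lwr_*/upr_* pairs by stripping the prefix and ordering on what remains
nm <- names(wide)[-1]                      # everything except 'vars'
wide %>% select(vars, all_of(nm[order(sub("^(lwr|upr)_", "", nm))]))
```

This puts each outcome's lower and upper bound side by side, with the outcomes in alphabetical order.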

Select multiple columns from ordered dataframe

I would like to calculate the mean value for each of my variables, and then I would like to create a list of the names of variables with the 3 largest mean values.
I will then use this list to subset my dataframe and will only include the 3 selected variables in additional analysis.
I'm close, but can't quite seem to write the code efficiently. And I'm trying to use pipes for the first time.
Here is a simplified dataset.
FA1 <- c(0.68, 0.79, 0.65, 0.72, 0.79, 0.78, 0.77, 0.67, 0.77, 0.7)
FA2 <- c(0.08, 0.12, 0.07, 0.13, 0.09, 0.12, 0.13, 0.08, 0.17, 0.09)
FA3 <- c(0.1, 0.06, 0.08, 0.09, 0.06, 0.08, 0.09, 0.09, 0.06, 0.08)
FA4 <- c(0.17, 0.11, 0.19, 0.13, 0.14, 0.14, 0.13, 0.16, 0.11, 0.16)
FA5 <- c(2.83, 0.9, 3.87, 1.55, 1.91, 1.46, 1.68, 2.5, 3.0, 1.45)
df <- data.frame(FA1, FA2, FA3, FA4, FA5)
And here is the piece of code I've written that doesn't quite get me what I want.
colMeans(df) %>% rank()
First identify the three columns with the highest means. I use colMeans to calculate the column means, then sort the means in decreasing order and keep only the first three, which are the three largest.
three <- sort(colMeans(df), decreasing = TRUE)[1:3]
Then, keep only those columns.
df[,names(three)]
> df[,names(three)]
FA5 FA1 FA4
1 2.83 0.68 0.17
2 0.90 0.79 0.11
3 3.87 0.65 0.19
4 1.55 0.72 0.13
5 1.91 0.79 0.14
6 1.46 0.78 0.14
7 1.68 0.77 0.13
8 2.50 0.67 0.16
9 3.00 0.77 0.11
10 1.45 0.70 0.16
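Since you mentioned wanting to use pipes, the whole selection can also be written as a single chain; a sketch assuming dplyr is loaded:

```r
library(dplyr)
df %>% select(all_of(names(sort(colMeans(df), decreasing = TRUE))[1:3]))
```

With the sample data this keeps FA5, FA1 and FA4, the same three columns as above.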

polr(..) ordinal logistic regression in R

I'm experiencing some trouble when using the polr function.
Here is a subset of the data I have:
# response variable
rep = factor(c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04, 0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03, 0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06, 0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02, 0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06,0.03))
# "rep" is a discrete variable representing a proportion, so it varies between 0 and 1
# The proportions are discrete because each one is the fraction of TRUE values in a finite list of TRUE/FALSE. Example: if the list has 3 elements, the proportion can only be 0, 1/3, 2/3 or 1
# predictor variables
set.seed(10)
pred.1 = sample(x=rep(1:5,10),size=50)
pred.2 = sample(x=rep(c('a','b','c','d','e'),10),size=50)
# "pred" are discrete variables
# polr
polr(rep~pred.1+pred.2)
The subset I gave you works fine! But my entire data set, and some subsets of it, do not work, and I can't find anything in my data that differs from this subset except the quantity. So here is my question: is there any limitation, in terms of the number of levels for example, that would lead to the following error message:
Error in optim(s0, fmin, gmin, method = "BFGS", ...) :
the initial value in 'vmin' is not finite
and the notification message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
(I had to translate these two messages into English, so they may not be word-for-word exact.)
Depending on which subset of my data I use, I sometimes get only the notification message, and sometimes everything is fine.
For information, my rep variable has a total of 101 levels (and contains nothing other than the kind of data I described).
So this is a terrible question to ask, because I can't give you my full dataset and I don't know where the problem is. Can you guess where my problem comes from based on this information?
Thank you
Following @joran's advice that your problem is probably the 100-level factor, I'm going to recommend something that probably isn't statistically valid but may still be effective in your particular situation: don't use ordinal logistic regression at all. Just drop it. Perform a simple linear regression, then discretize the output as necessary with a specialized rounding procedure. Give it a shot and see how well it works for you.
rep.v = c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04, 0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03, 0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06, 0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02, 0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06,0.03)
set.seed(10)
pred.1 = factor(sample(x=rep(1:5,10),size=50))
pred.2 = factor(sample(x=rep(c('a','b','c','d','e'),10),size=50))
model = lm(rep.v ~ pred.1 + pred.2)  # pred.1 and pred.2 are already factors
output = predict(model, newdata = data.frame(pred.1, pred.2))  # predict.lm takes newdata, not newx
# Here's one way you could accomplish the discretization/rounding
f.levels = unique(rep.v)
rounded = sapply(output, function(x){
  d = abs(f.levels - x)
  f.levels[which.min(d)]  # nearest observed proportion; which.min takes the first on ties
})
> rounded
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
0.06 0.07 0.00 0.06 0.15 0.00 0.07 0.00 0.13 0.06 0.06 0.15 0.15 0.92 0.15 0.92 0.15 0.15 0.06 0.06 0.00 0.07 0.15 0.15
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
0.15 0.15 0.00 0.00 0.15 0.00 0.15 0.15 0.07 0.15 0.00 0.07 0.15 0.00 0.15 0.15 0.00 0.15 0.15 0.15 0.92 0.15 0.15 0.00
49 50
0.13 0.15
orm from the rms package can handle ordered outcomes with a large number of categories.
library(rms)
orm(rep ~ pred.1 + pred.2)
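A slightly fuller sketch of that approach, assuming the `rep.v`, `pred.1` and `pred.2` objects from the previous answer (orm treats a numeric response as ordinal, so the raw proportions can be passed without converting to a factor):

```r
library(rms)
fit <- orm(rep.v ~ pred.1 + pred.2)
fit  # one intercept per distinct response value, plus the slope estimates
```

Whether the proportional-odds assumption is reasonable for your data is a separate question worth checking.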
