How to make grouped summary statistics based on densities in R

Goal: I would like to generate grouped percentiles for each group (hrzn).
I have the following data:
# A tibble: 3,500 x 3
    hrzn parameter density
   <dbl>     <dbl>   <dbl>
 1     1    0.0183 0.00914
 2     1    0.0185 0.00905
 3     1    0.0187 0.00897
 4     1    0.0189 0.00888
 5     1    0.0191 0.00880
 6     1    0.0193 0.00872
 7     1    0.0194 0.00864
 8     1    0.0196 0.00855
 9     1    0.0198 0.00847
10     1    0.0200 0.00839
The hrzn is the group, the parameter is a grid of parameter space, and the density is the density for the value in the parameter column.
I would like to generate summary statistics (percentiles 10 through 90, in steps of 10) by hrzn. I am trying to keep this computationally efficient. I know I could sample the parameter with the density as weights, but I am curious whether there is a faster way to generate the percentiles from the density without sampling.
The data may be obtained with the following:
df <- readr::read_csv("https://raw.githubusercontent.com/alexhallam/density_data/master/data.csv")
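For reference, the sampling baseline mentioned above might look roughly like this (a sketch only; the number of draws and the seed are arbitrary, and it needs dplyr 1.0+ for a multi-row summarise()):
library(dplyr)
set.seed(42)
df %>%
  group_by(hrzn) %>%
  summarise(
    centile = seq(10, 90, 10),
    # draw from the parameter grid with the density as sampling weights,
    # then take empirical quantiles of the draws
    value = quantile(sample(parameter, 1e5, replace = TRUE, prob = density),
                     probs = seq(0.1, 0.9, 0.1))
  )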

When I load the data from your csv, each of the 5 groups has identical values for parameter and density:
df
#> # A tibble: 3,500 x 3
#>     hrzn parameter density
#>    <int>     <dbl>   <dbl>
#>  1     1    0.0183 0.00914
#>  2     1    0.0185 0.00905
#>  3     1    0.0187 0.00897
#>  4     1    0.0189 0.00888
#>  5     1    0.0191 0.00880
#>  6     1    0.0193 0.00872
#>  7     1    0.0194 0.00864
#>  8     1    0.0196 0.00855
#>  9     1    0.0198 0.00847
#> 10     1    0.0200 0.00839
#> # ... with 3,490 more rows
sapply(1:5, function(x) all(df$parameter[df$hrzn == x] == df$parameter[df$hrzn == 1]))
# [1] TRUE TRUE TRUE TRUE TRUE
sapply(1:5, function(x) all(df$density[df$hrzn == x] == df$density[df$hrzn == 1]))
# [1] TRUE TRUE TRUE TRUE TRUE
I'm not sure if this is a mistake or not, but clearly if you're worried about computation, anything you want to do on all the groups can be done 5 times faster by only doing it on a single group.
Anyway, to get the 10th and 90th centiles for each hrzn, you just need to see which parameter is adjacent to 0.1 and 0.9 on the cumulative distribution function. Let's generalize to working it out for all the groups in case there's an issue with the data or you want to repeat it with different data:
library(dplyr)
df %>%
  mutate(hrzn = factor(hrzn)) %>%
  group_by(hrzn) %>%
  summarise(centile_10 = parameter[which(cumsum(density) > .1)[1]],
            centile_90 = parameter[which(cumsum(density) > .9)[1]])
#> # A tibble: 5 x 3
#>   hrzn  centile_10 centile_90
#>   <fct>      <dbl>      <dbl>
#> 1 1         0.0204      0.200
#> 2 2         0.0204      0.200
#> 3 3         0.0204      0.200
#> 4 4         0.0204      0.200
#> 5 5         0.0204      0.200
Of course, they're all the same for the reasons mentioned above.
If you're worried about computation time (even though the above only takes a few milliseconds), and you don't mind opaque code, you could take advantage of the ordering and cut the cumsum of your entire density column between 0 and 5 in steps of 0.1, to get all the deciles at once, like this:
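# In words: cumsum(df$density) climbs from roughly 0 to roughly 5 because the five
# identical groups are stacked end to end; cut(..., seq(0, 5, .1)) labels each row
# with the 0.1-wide band its cumulative sum falls in; diff(...) != 0 flags the rows
# where a new band starts (a decile boundary crossed); and the + 1 shifts those flags
# onto the first row past each boundary, which is the row that gets kept.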
summary <- df[which((diff(as.numeric(cut(cumsum(df$density), seq(0,5,.1))) - 1) != 0)) + 1,]
summary <- summary[-(1:5)*10,]
summary$centile <- rep(1:9*10, 5)
summary
#> # A tibble: 45 x 4
#> hrzn parameter density centile
#> <int> <dbl> <dbl> <dbl>
#> 1 1 0.0204 0.00824 10
#> 2 1 0.0233 0.00729 20
#> 3 1 0.0271 0.00634 30
#> 4 1 0.0321 0.00542 40
#> 5 1 0.0392 0.00453 50
#> 6 1 0.0498 0.00366 60
#> 7 1 0.0679 0.00281 70
#> 8 1 0.103 0.00199 80
#> 9 1 0.200 0.00114 90
#> 10 2 0.0204 0.00824 10
#> # ... with 35 more rows
Perhaps I have misunderstood you and you are actually working in a 5-dimensional parameter space and want to know the parameter values at the 10th and 90th centiles of the 5-d density. In that case, you can take advantage of the fact that all groups are the same and calculate the 10th and 90th centiles of the 5-d density by simply taking the 5th root of the two probability levels (0.1 and 0.9):
df %>%
  mutate(hrzn = factor(hrzn)) %>%
  group_by(hrzn) %>%
  summarise(centile_10 = parameter[which(cumsum(density) > .1^.2)[1]],
            centile_90 = parameter[which(cumsum(density) > .9^.2)[1]])
#> # A tibble: 5 x 3
#>   hrzn  centile_10 centile_90
#>   <fct>      <dbl>      <dbl>
#> 1 1         0.0545      0.664
#> 2 2         0.0545      0.664
#> 3 3         0.0545      0.664
#> 4 4         0.0545      0.664
#> 5 5         0.0545      0.664
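One caveat worth noting: the cumsum() trick assumes the density values within each group already sum to (approximately) 1. If they were raw density heights on a grid that don't, the cumulative sum could be normalised inside the same pipeline; a minimal sketch:
df %>%
  group_by(hrzn) %>%
  summarise(centile_10 = parameter[which(cumsum(density) / sum(density) > .1)[1]],
            centile_90 = parameter[which(cumsum(density) / sum(density) > .9)[1]])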

Related

Using accumulate function with second to last value as .init argument

I have recently come across an interesting problem: calculating a vector's values using its penultimate value as the .init argument plus an additional vector's current value. Here is the sample data set:
library(dplyr)  # for if_else()
set.seed(13)
dt <- data.frame(id = rep(letters[1:2], each = 5), time = rep(1:5, 2), ret = rnorm(10)/100)
dt$ind <- if_else(dt$time == 1, 120, if_else(dt$time == 2, 125, as.numeric(NA)))
id time ret ind
1 a 1 0.005543269 120
2 a 2 -0.002802719 125
3 a 3 0.017751634 NA
4 a 4 0.001873201 NA
5 a 5 0.011425261 NA
6 b 1 0.004155261 120
7 b 2 0.012295066 125
8 b 3 0.002366797 NA
9 b 4 -0.003653828 NA
10 b 5 0.011051443 NA
What I would like to calculate is:
ind_{t} = ind_{t-2}*(1+ret_{t})
I tried the following code. Since .init is of no use here, I tried to nullify the original .init and created a virtual .init, but unfortunately it won't drag the newly created values (from the third row downward) into the calculation:
dt %>%
  group_by(id) %>%
  mutate(ind = c(120, accumulate(3:n(), .init = 125,
                                 ~ .x * 1/.x * ind[.y - 2] * (1 + ret[.y]))))
# A tibble: 10 x 4
# Groups: id [2]
id time ret ind
<chr> <int> <dbl> <dbl>
1 a 1 0.00554 120
2 a 2 -0.00280 125
3 a 3 0.0178 122.
4 a 4 0.00187 125.
5 a 5 0.0114 NA
6 b 1 0.00416 120
7 b 2 0.0123 125
8 b 3 0.00237 120.
9 b 4 -0.00365 125.
10 b 5 0.0111 NA
I was wondering if there is a tweak I could make to this code to make it work completely.
I would greatly appreciate your help.
Use a state vector consisting of the current value of ind and the prior value of ind. That way the prior state contains the second prior value of ind. We encode that into complex values with the real part equal to ind and the imaginary part equal to the prior value of ind. At the end we take the real part.
library(dplyr)
library(purrr)
dt %>%
  group_by(id) %>%
  mutate(result = c(ind[1],
                    Re(accumulate(.x = tail(ret, -2),
                                  .f = ~ Im(.x) * (1 + .y) + Re(.x) * 1i,
                                  .init = ind[2] + ind[1] * 1i)))) %>%
  ungroup
giving:
# A tibble: 10 x 5
id time ret ind result
<chr> <int> <dbl> <dbl> <dbl>
1 a 1 0.00554 120 120
2 a 2 -0.00280 125 125
3 a 3 0.0178 NA 122.
4 a 4 0.00187 NA 125.
5 a 5 0.0114 NA 124.
6 b 1 0.00416 120 120
7 b 2 0.0123 125 125
8 b 3 0.00237 NA 120.
9 b 4 -0.00365 NA 125.
10 b 5 0.0111 NA 122.
Variation
This variation eliminates the complex numbers and uses a vector of 2 elements in place of each complex number: the first number corresponds to the real part in the prior solution and the second to the imaginary part. This could be extended to cases where we need more than 2 numbers per state, or where the dependence involves all of the last N values. For the question here, though, it has the downside of an extra line of code to extract the result from the list of pairs, which is more involved than using Re in the prior solution.
dt %>%
  group_by(id) %>%
  mutate(result = c(ind[1],
                    accumulate(.x = tail(ret, -2),
                               .f = ~ c(.x[2] * (1 + .y), .x[1]),
                               .init = ind[2:1])),
         result = map_dbl(result, first)) %>%
  ungroup
Check
We check that the results above are correct. Alternatively, this could be used as a straightforward solution in its own right.
calc <- function(ind, ret) {
  for(i in seq(3, length(ret))) ind[i] <- ind[i-2] * (1 + ret[i])
  ind
}
dt %>%
  group_by(id) %>%
  mutate(result = calc(ind, ret)) %>%
  ungroup
giving:
# A tibble: 10 x 5
id time ret ind result
<chr> <int> <dbl> <dbl> <dbl>
1 a 1 0.00554 120 120
2 a 2 -0.00280 125 125
3 a 3 0.0178 NA 122.
4 a 4 0.00187 NA 125.
5 a 5 0.0114 NA 124.
6 b 1 0.00416 120 120
7 b 2 0.0123 125 125
8 b 3 0.00237 NA 120.
9 b 4 -0.00365 NA 125.
10 b 5 0.0111 NA 122.
I would have done it by creating dummy groups for each sequence, so that it can be done for any lag N. Demonstrating it on new, more elaborate data:
df <- data.frame(
stringsAsFactors = FALSE,
grp = c("a","a","a","a",
"a","a","a","a","a","b","b","b","b","b",
"b","b","b","b"),
rate = c(0.082322056,
0.098491104,0.07294593,0.08741672,0.030179747,
0.061389031,0.011232314,0.08553277,0.091272669,
0.031577847,0.024039791,0.091719552,0.032540636,
0.020411727,0.094521716,0.081729178,0.066429708,
0.04985793),
ind = c(11000L,12000L,
13000L,NA,NA,NA,NA,NA,NA,10000L,13000L,12000L,
NA,NA,NA,NA,NA,NA)
)
df
#> grp rate ind
#> 1 a 0.08232206 11000
#> 2 a 0.09849110 12000
#> 3 a 0.07294593 13000
#> 4 a 0.08741672 NA
#> 5 a 0.03017975 NA
#> 6 a 0.06138903 NA
#> 7 a 0.01123231 NA
#> 8 a 0.08553277 NA
#> 9 a 0.09127267 NA
#> 10 b 0.03157785 10000
#> 11 b 0.02403979 13000
#> 12 b 0.09171955 12000
#> 13 b 0.03254064 NA
#> 14 b 0.02041173 NA
#> 15 b 0.09452172 NA
#> 16 b 0.08172918 NA
#> 17 b 0.06642971 NA
#> 18 b 0.04985793 NA
library(tidyverse)
N = 3
df %>%
  group_by(grp) %>%
  group_by(d = row_number() %% N, .add = TRUE) %>%
  mutate(ind = accumulate(rate[-1] + 1, .init = ind[1], ~ .x * .y))
#> # A tibble: 18 x 4
#> # Groups: grp, d [6]
#> grp rate ind d
#> <chr> <dbl> <dbl> <dbl>
#> 1 a 0.0823 11000 1
#> 2 a 0.0985 12000 2
#> 3 a 0.0729 13000 0
#> 4 a 0.0874 11962. 1
#> 5 a 0.0302 12362. 2
#> 6 a 0.0614 13798. 0
#> 7 a 0.0112 12096. 1
#> 8 a 0.0855 13420. 2
#> 9 a 0.0913 15057. 0
#> 10 b 0.0316 10000 1
#> 11 b 0.0240 13000 2
#> 12 b 0.0917 12000 0
#> 13 b 0.0325 10325. 1
#> 14 b 0.0204 13265. 2
#> 15 b 0.0945 13134. 0
#> 16 b 0.0817 11169. 1
#> 17 b 0.0664 14147. 2
#> 18 b 0.0499 13789. 0
Alternate answer in dplyr (using your own data, only slightly modified):
set.seed(13)
dt <- data.frame(id = rep(letters[1:2], each = 5), time = rep(1:5, 2), ret = rnorm(10)/100)
dt$ind <- ifelse(dt$time == 1, 12000, ifelse(dt$time == 2, 12500, as.numeric(NA)))
library(dplyr, warn.conflicts = F)
dt %>%
  group_by(id) %>%
  group_by(d = row_number() %% 2, .add = TRUE) %>%
  mutate(ind = cumprod(1 + duplicated(id) * ret) * ind[1])
#> # A tibble: 10 x 5
#> # Groups: id, d [4]
#> id time ret ind d
#> <chr> <int> <dbl> <dbl> <dbl>
#> 1 a 1 0.00554 12000 1
#> 2 a 2 -0.00280 12500 0
#> 3 a 3 0.0178 12213. 1
#> 4 a 4 0.00187 12523. 0
#> 5 a 5 0.0114 12353. 1
#> 6 b 1 0.00416 12000 0
#> 7 b 2 0.0123 12500 1
#> 8 b 3 0.00237 12028. 0
#> 9 b 4 -0.00365 12454. 1
#> 10 b 5 0.0111 12161. 0
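In case the cumprod() line looks cryptic: duplicated(id) is FALSE (i.e. 0) on the first row of every (id, d) sub-group, so the growth factor there is exactly 1 and each chain starts from ind[1]. A quick way to inspect those factors (the growth column is just illustrative, not part of the answer):
dt %>%
  group_by(id) %>%
  group_by(d = row_number() %% 2, .add = TRUE) %>%
  mutate(growth = 1 + duplicated(id) * ret)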

R dplyr::ntile vs ggplot2::cut_number

I need to divide a vector into quantiles, i.e. bins with the same number of observations. I am currently looking at these two functions: dplyr::ntile and ggplot2::cut_number. It looks like they do roughly the same thing. The only difference is that ntile gives back the quantile to which the observation belongs, i.e. 1, 2, 3, ..., whereas cut_number returns the limits of the interval, i.e. (0, 0.5], (0.5, 1], ...
I did some experiments and it looks like I roughly get the same answers:
library(tidyverse)
df <- tibble(vec = runif(1000))
df %>% mutate(vec_cut = cut_number(vec, 10)) %>% count(vec_cut)
#> # A tibble: 10 x 2
#> vec_cut n
#> <fct> <int>
#> 1 [7.29e-05,0.0905] 100
#> 2 (0.0905,0.211] 100
#> 3 (0.211,0.325] 100
#> 4 (0.325,0.423] 100
#> 5 (0.423,0.5] 100
#> 6 (0.5,0.602] 100
#> 7 (0.602,0.699] 100
#> 8 (0.699,0.806] 100
#> 9 (0.806,0.91] 100
#> 10 (0.91,0.997] 100
df %>%
  mutate(vec_tile = ntile(vec, 10)) %>%
  group_by(vec_tile) %>%
  summarise(count = n(),
            min = min(vec),
            max = max(vec))
#> `summarise()` ungrouping output (override with `.groups` argument)
#> # A tibble: 10 x 4
#> vec_tile count min max
#> <int> <int> <dbl> <dbl>
#> 1 1 100 0.0000729 0.0905
#> 2 2 100 0.0905 0.210
#> 3 3 100 0.211 0.324
#> 4 4 100 0.325 0.422
#> 5 5 100 0.423 0.499
#> 6 6 100 0.501 0.602
#> 7 7 100 0.602 0.699
#> 8 8 100 0.702 0.806
#> 9 9 100 0.806 0.910
#> 10 10 100 0.911 0.997
The problem is that sometimes cut_number fails where ntile does not.
vec <- c(rep(0,100), seq(1:100))
table(cut_number(vec, 10))
#> Error: Insufficient data values to produce 10 bins.
#> Run `rlang::last_error()` to see where the error occurred.
table(ntile(vec,10))
#> 1 2 3 4 5 6 7 8 9 10
#> 20 20 20 20 20 20 20 20 20 20
I could use ntile; however, it is nice to have the interval limits and not just the index of the quantiles. Am I doing something wrong?
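One workaround worth sketching (not a definitive fix): cut_number fails here because several of the requested decile boundaries are all 0, so the breaks are not unique. You can build the breaks yourself with quantile() and drop duplicates, accepting fewer, wider labelled bins where there are ties:
# breaks from the empirical deciles, with duplicates collapsed
breaks <- unique(quantile(vec, probs = seq(0, 1, 0.1)))
table(cut(vec, breaks, include.lowest = TRUE))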

Calculate confidence intervals (binomial) within data frame

I want to get the confidence intervals for proportions within my tibble. Is there a way of doing this?
library(tidyverse)
library(Hmisc)
library(broom)
df <- tibble(id = c(1, 2, 3, 4, 5, 6),
             count = c(4, 1, 22, 4545, 33, 23),
             n = c(22, 65, 34, 6323, 35, 45))
Which looks like this:
# A tibble: 6 x 3
id count n
<dbl> <dbl> <dbl>
1 1 4 22
2 2 1 65
3 3 22 34
4 4 4545 6323
5 5 33 35
6 6 23 45
I tried using binconf from Hmisc and tidy from broom, though the solution could come from any package:
The intervals for the first row:
tidy(binconf(4, 22))
# A tibble: 1 x 4
.rownames PointEst Lower Upper
<chr> <dbl> <dbl> <dbl>
1 "" 0.182 0.0731 0.385
I have tried using map in purrr but get errors:
map(df, tidy(binconf(count, n)))
Error in x[i] : object of type 'closure' is not subsettable
I could just calculate them using dplyr, but I get values below zero (e.g. row 2) or above one (e.g. row 5), which I don't want, e.g.
df %>%
  mutate(prop = count / n) %>%
  mutate(se = sqrt(prop * (1 - prop) / n)) %>%
  mutate(lower = prop - (se * 1.96)) %>%
  mutate(upper = prop + (se * 1.96))
# A tibble: 6 x 7
id count n prop se lower upper
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 4 22 0.182 0.0822 0.0206 0.343
2 2 1 65 0.0154 0.0153 -0.0145 0.0453
3 3 22 34 0.647 0.0820 0.486 0.808
4 4 4545 6323 0.719 0.00565 0.708 0.730
5 5 33 35 0.943 0.0392 0.866 1.02
6 6 23 45 0.511 0.0745 0.365 0.657
Is there a good way of doing this? I did have a look at the confint_tidy() function, but could not get that to work. Any ideas?
It may not be tidy but
> as_tibble(cbind(df, binconf(df$count, df$n)))
# A tibble: 6 x 6
id count n PointEst Lower Upper
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 4 22 0.182 0.0731 0.385
2 2 1 65 0.0154 0.000789 0.0821
3 3 22 34 0.647 0.479 0.785
4 4 4545 6323 0.719 0.708 0.730
5 5 33 35 0.943 0.814 0.984
6 6 23 45 0.511 0.370 0.650
seems to work
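If a fully pipe-friendly version is preferred, a rowwise map2() call should also work (a sketch; it wraps each binconf() result in a tibble rather than going through broom::tidy()):
library(purrr)
library(tidyr)
df %>%
  # call binconf() once per row, keep the result as a list-column, then unnest
  mutate(ci = map2(count, n, ~ as_tibble(binconf(.x, .y)))) %>%
  unnest(ci)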

How can I incorporate sampling weights into an analysis of Likert scale survey questions?

I am analysing survey data whose questions are in the form of a Likert scale. I used auxiliary census data to calculate weights for different age groups within my sample. I would now like to use these weights to correct my sample data and then display the distribution for each question, differentiated by age group.
Any help is appreciated!
The conventional way of doing this in R would be to use the survey package and/or the srvyr package which allows you to use dplyr-style syntax while relying on the survey package to correctly handle weighting and complex survey designs.
Below is a small example of analyzing Likert data using the srvyr package.
# Create example data ----
library(survey)
library(srvyr)
set.seed(1999)
## Data frame of responses and grouping information
likert_response_options <- c("1 - Strongly Disagree", "2", "3", "4", "5 - Strongly Agree")
data_df <- data.frame(
  group_vbl = factor(sample(LETTERS[1:4], 20, replace = TRUE), LETTERS[1:4]),
  likert_item = factor(x = sample(likert_response_options, 20, replace = TRUE),
                       levels = likert_response_options),
  weights = rnorm(20, mean = 1, sd = 0.1)
)
## Create a survey design object from the data frame
my_survey_design <- as_survey_design(data_df, weights = weights)
# Create weighted summaries ----
my_survey_design %>%
  group_by(group_vbl, likert_item) %>%
  summarize(proportion = survey_mean())
#> # A tibble: 20 x 5
#> group_vbl likert_item proportion proportion_low proportion_upp
#> <fct> <fct> <dbl> <dbl> <dbl>
#> 1 A 1 - Strongly Disagree 0.300 -0.252 0.851
#> 2 A 2 0 0 0
#> 3 A 3 0.700 0.149 1.25
#> 4 A 4 0 0 0
#> 5 A 5 - Strongly Agree 0 0 0
#> 6 B 1 - Strongly Disagree 0.127 -0.130 0.384
#> 7 B 2 0.146 -0.143 0.435
#> 8 B 3 0.259 -0.0872 0.606
#> 9 B 4 0.468 0.0577 0.878
#> 10 B 5 - Strongly Agree 0 0 0
#> 11 C 1 - Strongly Disagree 0 0 0
#> 12 C 2 0.241 -0.213 0.696
#> 13 C 3 0.292 -0.221 0.804
#> 14 C 4 0.250 -0.216 0.716
#> 15 C 5 - Strongly Agree 0.217 -0.205 0.639
#> 16 D 1 - Strongly Disagree 0 0 0
#> 17 D 2 0.529 0.0906 0.967
#> 18 D 3 0 0 0
#> 19 D 4 0.159 -0.156 0.474
#> 20 D 5 - Strongly Agree 0.312 -0.0888 0.713
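To then display the distributions by group, the weighted summaries can be passed straight to ggplot2; a rough sketch (the plotting choices here are illustrative, not prescriptive):
library(ggplot2)
my_survey_design %>%
  group_by(group_vbl, likert_item) %>%
  summarize(proportion = survey_mean()) %>%
  # one panel per group, bars for the weighted proportion of each response
  ggplot(aes(x = likert_item, y = proportion)) +
  geom_col() +
  facet_wrap(~ group_vbl) +
  coord_flip()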

Create Multiple 2-dimensional Tables from Multiple Columns in R Using dplyr

I'm looking for an efficient way to create multiple two-dimensional tables from an R data frame of chi-square statistics. The code below builds on this answer to a previous question of mine about getting chi-square stats by groups. Now I want to create tables from the output by group. Here's what I have so far, using the hsbdemo data frame from the UCLA R site:
ml <- foreign::read.dta("https://stats.idre.ucla.edu/stat/data/hsbdemo.dta")
str(ml)
'data.frame': 200 obs. of 13 variables:
$ id : num 45 108 15 67 153 51 164 133 2 53 ...
$ female : Factor w/ 2 levels "male","female": 2 1 1 1 1 2 1 1 2 1 ...
$ ses : Factor w/ 3 levels "low","middle",..: 1 2 3 1 2 3 2 2 2 2 ...
$ schtyp : Factor w/ 2 levels "public","private": 1 1 1 1 1 1 1 1 1 1 ...
$ prog : Factor w/ 3 levels "general","academic",..: 3 1 3 3 3 1 3 3 3 3 ...
ml %>%
  dplyr::select(prog, ses, schtyp) %>%
  table() %>%
  apply(3, chisq.test, simulate.p.value = TRUE) %>%
  lapply(`[`, c(6, 7, 9)) %>%
  reshape2::melt() %>%
  tidyr::spread(key = L2, value = value) %>%
  dplyr::rename(SchoolType = L1) %>%
  dplyr::arrange(SchoolType, prog) %>%
  dplyr::select(-observed, -expected) %>%
  reshape2::acast(., prog ~ ses ~ SchoolType) %>%
  tbl_df()
The output after the last arrange statement produces this tibble (showing only the first five rows):
prog ses SchoolType expected observed stdres
1 general low private 0.37500 2 3.0404678
2 general middle private 3.56250 3 -0.5187244
3 general high private 2.06250 1 -1.0131777
4 academic low private 1.50000 0 -2.5298221
5 academic middle private 14.25000 14 -0.2078097
It's easy to select one column, for example, stdres, and pass it to acast and tbl_df, which gets pretty much what I'm after:
# A tibble: 3 x 6
low.private middle.private high.private low.public middle.public high.public
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 3.04 -0.519 -1.01 1.47 -0.236 -1.18
2 -2.53 -0.208 1.50 -0.940 -2.06 3.21
3 -0.377 1.21 -1.06 -0.331 2.50 -2.45
Now I can repeat these steps for the observed and expected frequencies and bind them by rows, but that seems inefficient. The output would be the observed frequencies stacked on the expected frequencies, stacked on the standardized residuals. Something like this:
low.private middle.private high.private low.public middle.public high.public
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 2 3 1 14 17 8
2 0 14 10 19 30 32
3 0 2 0 12 29 7
4 0.375 3.56 2.06 10.4 17.6 10.9
5 1.5 14.2 8.25 21.7 36.6 22.7
6 0.125 1.19 0.688 12.9 21.7 13.4
7 3.04 -0.519 -1.01 1.47 -0.236 -1.18
8 -2.53 -0.208 1.50 -0.940 -2.06 3.21
9 -0.377 1.21 -1.06 -0.331 2.50 -2.45
Seems there ought to be a way to do this without repeating the code for each column, probably by creating and processing a list. Thanks in advance.
Might this be the answer?
ml1 <- ml %>%
  dplyr::select(prog, ses, schtyp) %>%
  table() %>%
  apply(3, chisq.test, simulate.p.value = TRUE) %>%
  lapply(`[`, c(6, 7, 9)) %>%
  reshape2::melt()
ml2 <- ml1 %>%
  dplyr::mutate(type = paste(ses, L1, sep = ".")) %>%
  dplyr::select(-ses, -L1) %>%
  tidyr::spread(type, value)
This gives you
prog L2 high.private high.public low.private low.public middle.private middle.public
1 general expected 2.062500 10.910714 0.3750000 10.4464286 3.5625000 17.6428571
2 general observed 1.000000 8.000000 2.0000000 14.0000000 3.0000000 17.0000000
3 general stdres -1.013178 -1.184936 3.0404678 1.4663681 -0.5187244 -0.2360209
4 academic expected 8.250000 22.660714 1.5000000 21.6964286 14.2500000 36.6428571
5 academic observed 10.000000 32.000000 0.0000000 19.0000000 14.0000000 30.0000000
6 academic stdres 1.504203 3.212431 -2.5298221 -0.9401386 -0.2078097 -2.0607058
7 vocation expected 0.687500 13.428571 0.1250000 12.8571429 1.1875000 21.7142857
8 vocation observed 0.000000 7.000000 0.0000000 12.0000000 2.0000000 29.0000000
9 vocation stdres -1.057100 -2.445826 -0.3771236 -0.3305575 1.2081594 2.4999085
I am not sure I understand completely what you are after... But basically, create a new variable from SES and school type, and gather based on that. And obviously, reorder it as you wish :-)
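If the exact stacking from the question (observed, then expected, then standardized residuals) is wanted, it should just be a matter of ordering ml2 by statistic and dropping the identifier columns; a sketch:
ml2 %>%
  # stable sort: observed rows first, then expected, then stdres,
  # keeping the original prog order within each statistic
  dplyr::arrange(match(L2, c("observed", "expected", "stdres"))) %>%
  dplyr::select(-prog, -L2)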
