I have this table:
group   May 1990   Jun 1990   Jul 1990
1       581        552        465
2       193        184        176
3       207        177        165
Total   981        913        806
I want to express each value as a proportion of the "Total" row, so that every column sums to 1:
group   May 1990   Jun 1990   Jul 1990
1       0.59       0.60       0.58
2       0.19       0.21       0.22
3       0.21       0.19       0.20
Total   1          1          1
I got this far for now, but it is not what I want:
df <- data.frame(group = c('1', '2', '3', 'Total'),
                 may_1990 = c(581, 193, 207, 981),
                 jun_1990 = c(552, 184, 177, 913),
                 jul_1990 = c(465, 176, 165, 806))
total <- df %>% slice_tail(n = 1)
z <- df %>% rowwise() %>% mutate(across(where(is.numeric), ~ .x / total[-1]))
With across:
library(dplyr)
df %>%
  mutate(across(where(is.numeric), ~ .x / .x[group == "Total"]))
group may_1990 jun_1990 jul_1990
1 1 0.5922528 0.6046002 0.5769231
2 2 0.1967380 0.2015334 0.2183623
3 3 0.2110092 0.1938664 0.2047146
4 Total 1.0000000 1.0000000 1.0000000
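If you also want the two-decimal presentation from your desired output, one option (my addition, not part of the answer above) is to wrap the same expression in round():
df %>%
  mutate(across(where(is.numeric), ~ round(.x / .x[group == "Total"], 2)))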
With the nature of your data, this could also work if you prefer base R. Because the Total row equals the sum of the rows above it, each column sums to exactly twice its total, so doubling the output of proportions() gives the share of the Total row:
df[-1] <- sapply(df[-1], proportions) * 2
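If that trick feels too implicit, a more explicit base R route is to divide each column by its Total entry directly; this is a sketch of my own rather than part of the original answer:
# Pull the Total row, then divide every numeric column by it
totals <- unlist(df[df$group == "Total", -1])
df[-1] <- sweep(df[-1], 2, totals, "/")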
I think the easy way to achieve this kind of table is to use the base table functions proportions() and addmargins():
df <- data.frame(group = c('1', '2', '3', 'Total'),
                 may_1990 = c(581, 193, 207, 981),
                 jun_1990 = c(552, 184, 177, 913),
                 jul_1990 = c(465, 176, 165, 806))

# Compute column-wise proportions for the data rows (margin = 2)
prop = proportions(as.matrix(df[-4, -1]), 2)

# Add the column totals back as a new row (margin = 1)
prop = addmargins(prop, 1)

# Create the final table
df_end = data.frame(
  group = c('1', '2', '3', 'Total'),
  prop
)
You obtain this:
group may_1990 jun_1990 jul_1990
1 1 0.5922528 0.6046002 0.5769231
2 2 0.1967380 0.2015334 0.2183623
3 3 0.2110092 0.1938664 0.2047146
Sum Total 1.0000000 1.0000000 1.0000000
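If you prefer, the same pipeline fits in a single expression (just a condensed restatement of the code above):
df_end = data.frame(group = df$group,
                    addmargins(proportions(as.matrix(df[-4, -1]), 2), 1))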
I used the freq function of the frequency package to get frequency percentages for my dataset$MoriskyAdherence, but R gives me rounded percent values. I need more decimal places.
library(frequency)
MoriskyAdherence <- dataset$MoriskyAdherence
freq(MoriskyAdherence)
The result shows Percent values of 35.0, 41.3, and 23.8, which sum to 100.1. The exact amounts should be 35.00, 41.25, and 23.75. What should I do?
I tried sprintf, as.data.frame, formatC, and some other functions to deal with it, but with no luck.
The freq function returns a character data frame and has no option to adjust the number of decimal places. However, it is easy to recreate the table however you want it. For example, I have written the function below, which gives you the same result but with two decimal places instead of one:
freq2 <- function(data_frame)
{
  df <- frequency::freq(data_frame)
  lapply(df, function(x)
  {
    # Raw counts, with the grand total in the last row
    n <- suppressWarnings(as.numeric(x$Freq))
    sum_all <- as.numeric(x$Freq[nrow(x)])
    raw_percent <- suppressWarnings(100 * n / sum_all)

    # The first "Total" row marks the end of the valid (non-missing) entries
    t_row <- grep("Total", x[, 2])[1]
    valid_percent <- suppressWarnings(100 * n / as.numeric(x$Freq[t_row]))

    # Rewrite the percentage columns with two decimal places
    x$Percent <- format(round(raw_percent, 2), nsmall = 2)
    x$'Valid Percent' <- format(round(valid_percent, 2), nsmall = 2)
    x$'Cumulative Percent' <- format(round(cumsum(valid_percent), 2), nsmall = 2)

    # Blank out the cells that freq leaves empty
    x$'Cumulative Percent'[t_row:nrow(x)] <- ""
    x$'Valid Percent'[(t_row + 1):nrow(x)] <- ""
    return(x)
  })
}
Now instead of
freq(MoriskyAdherence)
#> Building tables
#> |===========================================================================| 100%
#> $`x:`
#> x label Freq Percent Valid Percent Cumulative Percent
#> 2 Valid High Adherence 56 35.0 35.0 35.0
#> 3 Low Adherence 66 41.3 41.3 76.3
#> 4 Medium Adherence 38 23.8 23.8 100.0
#> 41 Total 160 100.0 100.0
#> 1 Missing <blank> 0 0.0
#> 5 <NA> 0 0.0
#> 7 Total 160 100.0
you can do
freq2(MoriskyAdherence)
#> Building tables
#> |===========================================================================| 100%
#> $`x:`
#> x label Freq Percent Valid Percent Cumulative Percent
#> 2 Valid High Adherence 56 35.00 35.00 35.00
#> 3 Low Adherence 66 41.25 41.25 76.25
#> 4 Medium Adherence 38 23.75 23.75 100.00
#> 41 Total 160 100.00 100.00
#> 1 Missing <blank> 0 0.00
#> 5 <NA> 0 0.00
#> 7 Total 160 100.00
which is exactly what you were looking for.
Two (potential) solutions:
Solution #1:
Make changes inside the function freq. This can be done by retrieving the function's code with the command freq (without parentheses), or by retrieving the code, with comments, from https://rdrr.io/github/wilcoxa/frequencies/src/R/freq.R.
My hunch is that to obtain more decimals, changes must be implemented at this point in the code:
# create a list of frequencies
message("Building tables")
all_freqs <- lapply_pb(names(x), function(y, x1 = as.data.frame(x), maxrow1 = maxrow, trim1 = trim){
  makefreqs(x1, y, maxrow1, trim1)
})
Solution #2:
If you're only after percentages with more decimals, you can use aggregate. Let's suppose your data has this structure: a data frame with two variables, one numeric and one a factor by which you want to group:
set.seed(123)
Var1 <- sample(LETTERS[1:4], 10, replace = T)
Var2 <- sample(10:100, 10, replace = T)
df <- data.frame(Var1, Var2)
Var1 Var2
1 B 97
2 D 51
3 B 71
4 D 62
5 D 19
6 A 91
7 C 32
8 D 13
9 C 39
10 B 96
Then to obtain your percentages by factor, you would use aggregate thus:
aggregate(Var2 ~ Var1, data = df, function(x) sum(x)/sum(Var2)*100)
Var1 Var2
1 A 15.93695
2 B 46.23468
3 C 12.43433
4 D 25.39405
You can control the number of decimals by using round:
aggregate(Var2 ~ Var1, data = df, function(x) round(sum(x) / sum(Var2) * 100, 3))
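An equivalent base R spelling (my own addition, not from the answer above) uses xtabs() to sum Var2 within each level of Var1 and proportions() for the shares:
# Sum Var2 by Var1, convert to percentages, round to 3 decimals
round(proportions(xtabs(Var2 ~ Var1, data = df)) * 100, 3)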
Let's suppose that a company has 3 Bosses and 20 Employees, where each Employee has done n_Projects with an overall Performance in percentage:
> df <- data.frame(Boss = sample(1:3, 20, replace = TRUE),
                   Employee = sample(1:20, 20),
                   n_Projects = sample(50:100, 20, replace = TRUE),
                   Performance = round(sample(1:100, 20, replace = TRUE) / 100, 2),
                   stringsAsFactors = FALSE)
> df
Boss Employee n_Projects Performance
1 3 8 79 0.57
2 1 3 59 0.18
3 1 11 76 0.43
4 2 5 85 0.12
5 2 2 75 0.10
6 2 9 66 0.60
7 2 19 85 0.36
8 1 20 79 0.65
9 2 17 79 0.90
10 3 14 77 0.41
11 1 1 78 0.97
12 1 7 72 0.52
13 2 6 62 0.69
14 2 10 53 0.97
15 3 16 91 0.94
16 3 4 98 0.63
17 1 18 63 0.95
18 2 15 90 0.33
19 1 12 80 0.48
20 1 13 97 0.07
The CEO asks me to compute the quality of the work for each boss. However, he asks for a specific calculation: Each Performance value has to have a weight equal to the n_Project value over the total n_Project for that boss.
For example, Boss 1 has a total of 604 n_Projects; Employee 1 contributes a weighted Performance of 0.13 (78/604 * 0.97 = 0.13), Employee 3 contributes 0.02 (59/604 * 0.18 = 0.02), and so on. The sum of these weighted values is the Boss's performance, which for Boss 1 is 0.52. So, the final output should be like this:
Boss total_Projects Performance
1 604 0.52
2 340 0.18 #the values for boss 2 are invented
3 230 0.43 #the values for boss 3 are invented
However, I'm still struggling with this:
df %>%
  group_by(Boss) %>%
  summarise(total_Projects = sum(n_Projects),
            Weight_Project = n_Projects / sum(total_Projects))
In addition to this problem, can you give me any feedback about my code specifically, or any recommendations for improving my data-manipulation skills? (You can see in my profile that I have asked a lot of questions like this, but I'm still not able to solve them on my own.)
We can get the sum of the product of 'n_Projects' and 'Performance' and divide by 'total_projects':
library(dplyr)
df %>%
  group_by(Boss) %>%
  summarise(total_projects = sum(n_Projects),
            Weight_Project = sum(n_Projects * Performance) / total_projects)
# or
# Weight_Project = n_Projects %*% Performance/total_projects)
# A tibble: 3 x 3
# Boss total_projects Weight_Project
# <int> <int> <dbl>
#1 1 604 0.518
#2 2 595 0.475
#3 3 345 0.649
Adding some more details about what you did and about @akrun's answer: you must have received the following error message:
df %>%
  group_by(Boss) %>%
  summarise(total_Projects = sum(n_Projects),
            Weight_Project = n_Projects / sum(total_Projects))
## Error in summarise_impl(.data, dots) :
## Column `Weight_Project` must be length 1 (a summary value), not 7
This tells you that the calculation you made for Weight_Project does not yield a single value for each Boss, but one value per row (7 in one of the groups). summarise is there to collapse several values into one (via means, sums, etc.). Here you divide each value of n_Projects by sum(total_Projects), but you never summarise the result into a single value.
Assuming that what you had in mind was first calculating the weight for each performance, then combining it with the performance mark to yield the weighted mean performance, you can proceed in two steps:
df %>%
  group_by(Boss) %>%
  mutate(Weight_Performance = n_Projects / sum(n_Projects)) %>%
  summarise(weighted_mean_performance = sum(Weight_Performance * Performance))
The mutate statement preserves the total number of rows in df, but sum(n_Projects) is calculated for each Boss value thanks to group_by. Once each row has a project weight (which depends on the boss), you can calculate the weighted mean, which is a mean and thus a summary value, with summarise.
A more compact way that still makes the weighting explicit would be:
df %>%
  group_by(Boss) %>%
  summarise(weighted_mean_performance = sum((n_Projects / sum(n_Projects)) * Performance))

# Reordering to minimise parentheses, which gives @akrun's answer
df %>%
  group_by(Boss) %>%
  summarise(weighted_mean_performance = sum(n_Projects * Performance) / sum(n_Projects))
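For completeness (my addition, not from either answer), base R's weighted.mean() encapsulates exactly this sum(w * x) / sum(w) calculation:
df %>%
  group_by(Boss) %>%
  summarise(total_projects = sum(n_Projects),
            Weight_Project = weighted.mean(Performance, n_Projects))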
My data looks like this:
set <- c(1,1,1,2,2,3,3,3,3,3,4,4)
density <- c(1,3,3,1,3,1,1,1,3,3,1,3)
counts <- c(100,2,4,76,33,12,44,13,54,36,65,1)
data <- data.frame(set,density,counts)
data$set <- as.factor(data$set)
data$density <- as.factor(data$density)
Within a given set there are two density levels, "1" and "3". For a given set, I want to divide every count at density "1" by every count at density "3" (all possible combinations). I then want to print the original count associated with density "1", the ratio, and the set.
For example, the result for the first few rows should look like:
set counts ratio
1 100 50 #100/2
1 100 25 #100/4
2 76 2.3 #76/33
3 12 0.22 #12/54
3 12 0.33 #12/36
3 44 0.8148 #44/54
...
I thought I could achieve this with dplyr, but it seems a little too complicated for it.
It looks like the comments get you most of the way there. Here's a dplyr solution: with left_join, each density-1 row gets matched up with every density-3 row in the same set, providing output in line with your specification.
library(dplyr)
data_combined <- data %>%
  filter(density == 1) %>%
  # Match each density-1 row with each density-3 row in the set
  left_join(data %>% filter(density == 3), by = "set") %>%
  mutate(ratio = counts.x / counts.y) %>%
  select(set, counts.x, counts.y, ratio)
data_combined
data_combined
# set counts.x counts.y ratio
#1 1 100 2 50.0000000
#2 1 100 4 25.0000000
#3 2 76 33 2.3030303
#4 3 12 54 0.2222222
#5 3 12 36 0.3333333
#6 3 44 54 0.8148148
#7 3 44 36 1.2222222
#8 3 13 54 0.2407407
#9 3 13 36 0.3611111
#10 4 65 1 65.0000000
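The same self-join works in base R with merge(), if you'd rather not depend on dplyr; this is a sketch of my own, not part of the original answer:
# Split by density, then join on set to form all 1-vs-3 pairs
d1 <- subset(data, density == 1)
d3 <- subset(data, density == 3)
combined <- merge(d1, d3, by = "set", suffixes = c(".1", ".3"))
combined$ratio <- combined$counts.1 / combined$counts.3
combined[, c("set", "counts.1", "counts.3", "ratio")]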
My dataset looks like this:
Year Risk   Resource Utilization Band   Percent
2014        0                           .25
2014        1                           .19
2014        2                           .17
2014        3                           .31
2014        4                           .06
2014        5                           .01
2015        0                           .23
2015        1                           .21
2015        2                           .19
2015        3                           .31
2015        4                           .06
2015        5                           .31
I am attempting to compare percentage change from year to year in the dataset I am working with. For example, band 0 decreased 2% from 2014 to 2015. So far, I have created a loop that puts each year into bins and runs the calculation. The issue I am having is that the loop indexes each result as 1, so I have a bunch of repeating 1s next to my calculations. Here is the code I have been using; any help is much appreciated.
Results.data <- data.frame()
head(data)

percent <- 0
baseyear <- 0
nextyear <- 0
bin <- 0
yearPlus1 <- 0
bin2 <- 0
percent1 <- 0
percent2 <- 0
percentDif <- 0

for (i in 1:nrow(data))
{
  percent[i] <- data$PERCENT[i]
  baseyear[i] <- as.numeric(data$YEAR_RISK[i])
  bin[i] <- as.numeric(data$RESOURCE_UTILIZATION_BAND[i])
  #print(percent[i])
  #print(baseyear[i])
  #print(bin[i])
}

for (k in 1:nrow(data))
{
  for (j in 1:nrow(data))
  {
    yearPlus1 <- as.numeric(baseyear[j]) - 1
    firstYear <- as.numeric(baseyear[k])
    bin2 <- bin[j]
    bin1 <- bin[k]
    percent1 <- as.numeric(percent[k])
    percent2 <- as.numeric(percent[j])
    if (firstYear == yearPlus1 && bin1 == bin2)
    {
      percentDif <- percent2 - percent1
      print(percentDif)
      Results.data <- rbind(Results.data, c(percentDif))
    }
  }
}
If I understand your question, you can use grouping and vectorization to avoid loops. Here's an example using the dplyr package.
The code below first sorts by Year_Risk so that the data are ordered properly by time. Then we group by Resource_Utilization_Band so that we can get results separately for each level of Resource_Utilization_Band. Finally, we calculate the difference in Percent from year to year. The lag function returns the previous value in a sequence. (Instead of lag, we could have done Change = c(NA, diff(Percent)) as well.) All of these operations are chained one after the other using the dplyr chaining operator (%>%).
(Note that when I imported your data, I also changed your column names by adding underscores to make them legal R column names.)
library(dplyr)
# Year-over-year change within each Resource_Utilization_Band
# (Assuming your starting data frame is called "dat")
dat %>%
  arrange(Year_Risk) %>%
  group_by(Resource_Utilization_Band) %>%
  mutate(Change = Percent - lag(Percent))
Year_Risk Resource_Utilization_Band Percent Change
1 2014 0 0.25 NA
2 2014 1 0.19 NA
3 2014 2 0.17 NA
4 2014 3 0.31 NA
5 2014 4 0.06 NA
6 2014 5 0.01 NA
7 2015 0 0.23 -0.02
8 2015 1 0.21 0.02
9 2015 2 0.19 0.02
10 2015 3 0.31 0.00
11 2015 4 0.06 0.00
12 2015 5 0.31 0.30
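If you'd rather stay in base R, the same per-band differencing can be done with ave(), building on the diff() alternative mentioned above (a sketch of mine, assuming the underscored column names):
# Order by year, then difference Percent within each band
dat <- dat[order(dat$Year_Risk), ]
dat$Change <- ave(dat$Percent, dat$Resource_Utilization_Band,
                  FUN = function(x) c(NA, diff(x)))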
I have a dataset that has several hundred variables with hundreds of observations. Each observation has a unique identifier, and is associated with one of approximately 50 groups. It looks like so (the variables I'm not concerned about have been ignored below):
ID Group Score
1 10 400
2 11 473
3 12 293
4 13 382
5 14 283
6 11 348
7 11 645
8 13 423
9 10 434
10 10 124
etc.
I would like to calculate an adjusted mean for each observation that needs to use the N-count for each Group, the sum of Scores for that Group, as well as the means for the Scores of each group. (So, in the example above, the N-count for Group 11 is three, the sum is 1466, and the mean is 488.67, and I would use these numbers only on IDs 2, 6, and 7).
I've been fiddling with plyr, and am able to extract the n-counts and means as follows (accounting for missing Scores and Group values):
new_data <- ddply(main_data, "Group", N = sum(!is.na(Scores)), mean = mean(Scores, na.rm = TRUE))
I'm stuck, though, on how to get the sum of the scores for a particular group, and then how to calculate the adjusted means either within the main_data set or a new dataset. Any help would be appreciated.
Here is the plyr way.
ddply(main_data, .(Group), summarize, N = sum(!is.na(Score)), mean = mean(Score, na.rm = TRUE), total = sum(Score))
Group N mean total
1 10 3 319.3333 958
2 11 3 488.6667 1466
3 12 1 293.0000 293
4 13 2 402.5000 805
5 14 1 283.0000 283
Check out the dplyr package.
main_data %>%
  group_by(Group) %>%
  summarize(n = n(), mean = mean(Score, na.rm = TRUE), total = sum(Score))
Source: local data frame [5 x 4]
Group n mean total
1 10 3 319.3333 958
2 11 3 488.6667 1466
3 12 1 293.0000 293
4 13 2 402.5000 805
5 14 1 283.0000 283
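Since you ultimately want an adjusted mean per observation, note that swapping summarize for mutate keeps every row and attaches the group-level statistics to each ID; the adjustment formula itself is yours to fill in (a sketch, assuming dplyr):
library(dplyr)
main_data %>%
  group_by(Group) %>%
  mutate(N = sum(!is.na(Score)),
         total = sum(Score, na.rm = TRUE),
         group_mean = mean(Score, na.rm = TRUE)) %>%
  ungroup()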