I have data like this (derived using the table() function):
dat <- read.table(text = "responses freq percent
A 9 25.7
B 13 37.1
C 10 28.6
D 3 8.6", header = TRUE)
dat
responses freq percent
A 9 25.7
B 13 37.1
C 10 28.6
D 3 8.6
All I want is a totals row: a new row at the bottom that says Total, with 35 in the freq column and 100 in the percent column. I am unable to find a solution. colSums doesn't work because the first column is a string.
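To illustrate (a minimal sketch of the failure, using the dat built above; the exact error text may vary by R version):

# colSums() on the whole data frame fails because responses is character
colSums(dat)
#> Error in colSums(dat) : 'x' must be numeric

# Restricting to the numeric columns works
colSums(dat[-1])
#>    freq percent
#>      35     100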
One option is converting to a matrix and using addmargins to get the column sums as a separate row at the bottom. But the result will be a matrix.
m1 <- as.matrix(df1[-1])
rownames(m1) <- df1[,1]
res <- addmargins(m1, 1)
res
# freq percent
#A 9 25.7
#B 13 37.1
#C 10 28.6
#D 3 8.6
#Sum 35 100.0
If you want to convert it back to a data.frame:
data.frame(responses=rownames(res), res)
Another option is getting the sums with colSums on the numeric columns (df1[-1]) (I think this is where the OP got into trouble, i.e. applying colSums to the entire dataset instead of subsetting), creating a new data.frame with the responses column, and rbinding it with the original dataset.
rbind(df1, data.frame(responses='Total', as.list(colSums(df1[-1]))))
# responses freq percent
#1 A 9 25.7
#2 B 13 37.1
#3 C 10 28.6
#4 D 3 8.6
#5 Total 35 100.0
Data
df1 <- structure(list(responses = c("A", "B", "C", "D"), freq = c(9L,
13L, 10L, 3L), percent = c(25.7, 37.1, 28.6, 8.6)),
.Names = c("responses", "freq", "percent"), class = "data.frame",
row.names = c(NA, -4L))
This might be relevant: using the SciencesPo package, see this example:
library(SciencesPo)
tab(mtcars,gear,cyl)
#output
=================================
cyl
--------------------
gear 4 6 8 Total
---------------------------------
3 1 2 12 15
6.7% 13% 80% 100%
4 8 4 0 12
66.7% 33% 0% 100%
5 2 1 2 5
40.0% 20% 40% 100%
---------------------------------
Total 11 7 14 32
34.4% 22% 44% 100%
=================================
Chi-Square Test for Independence
Number of cases in table: 32
Number of factors: 2
Test for independence of all factors:
Chisq = 18.036, df = 4, p-value = 0.001214
Chi-squared approximation may be incorrect
X^2 df P(> X^2)
Likelihood Ratio 23.260 4 0.00011233
Pearson 18.036 4 0.00121407
Phi-Coefficient : NA
Contingency Coeff.: 0.6
Cramer's V : 0.531
@akrun I was about to post this, but you already did much the same. Correct me if I'm wrong, but I think we just need this, without creating a new data frame or using as.list:
rbind(df1, c("Total", colSums(df1[-1])))
Output:
responses freq percent
1 A 9 25.7
2 B 13 37.1
3 C 10 28.6
4 D 3 8.6
5 Total 35 100
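One caveat (my note): c("Total", colSums(df1[-1])) is a character vector, so rbind coerces the freq and percent columns to character, which is why percent prints as 100 rather than 100.0. You can verify with str():

# The totals row is a character vector, so rbind() coerces freq and
# percent to character in the combined data frame:
str(rbind(df1, c("Total", colSums(df1[-1]))))
#> 'data.frame': 5 obs. of  3 variables:
#>  $ responses: chr  "A" "B" "C" "D" ...
#>  $ freq     : chr  "9" "13" "10" "3" ...
#>  $ percent  : chr  "25.7" "37.1" "28.6" "8.6" ...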
With sqldf, the classes of the data frame are preserved:
library(sqldf)
sqldf("SELECT * FROM df1
UNION
SELECT 'Total', SUM(freq) AS freq, SUM(percent) AS percent FROM df1")
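A caution on ordering (my note, not part of the original answer): UNION may deduplicate and reorder rows, and 'Total' only happens to sort last here. UNION ALL appends the second SELECT's rows after the first, which in practice keeps the Total row at the bottom:

library(sqldf)
# UNION ALL preserves the order of the two SELECTs
sqldf("SELECT * FROM df1
       UNION ALL
       SELECT 'Total', SUM(freq), SUM(percent) FROM df1")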
Alternatively, you can use margin.table and rbind from base R. Two lines and voilà...
PS: There are more lines here because I am recreating the data, but you know what I mean :-)
Data
df1 <- matrix(c(9,25.7,13,37.1,10,28.6,3,8.6),ncol=2,byrow=TRUE)
colnames(df1) <- c("freq","percent")
rownames(df1) <- c("A","B","C","D")
Creating Total Calculation
Total <- margin.table(df1,2)
Combining Total Calculation to Original Data
df2 <- rbind(df1, Total)
df2
Inelegant, but it gets the job done. (Please provide reproducible data frames so we don't have to build them first.)
data = data.frame(letters[1:4], c(9,13,10,3), c(25.7,37.1, 28.6, 8.6))
colnames(data) = c("X","Y","Z")
data = rbind(data[,1:3], matrix(c("Sum",lapply(data[,2:3], sum)), nrow = 1)[,1:3])
With the janitor package, adorn_totals adds the row in one step:
library(janitor)
dat %>%
adorn_totals("row")
responses freq percent
A 9 25.7
B 13 37.1
C 10 28.6
D 3 8.6
Total 35 100.0
Related
My data have more than 50 variables with the same set of values distributed across them. I need to tabulate each value (more than 1000 values in total) with its frequency across the variables.
I need to create a frequency table of each value across all variables, for example: F447 5, A257 4, G1229 5, C245 2.
You can use table to get the frequency of each value across the variables, then convert the result to a data frame, make the names a column, and finally arrange by frequency. If you want descending order instead, use arrange(desc(frequency)).
library(dplyr)
data.frame(frequency = unclass(table(unlist(df)))) %>%
tibble::rownames_to_column("variable") %>%
arrange(frequency)
Output
variable frequency
1 C245 1
2 A257 2
3 F447 3
4 G1229 3
Or with base R, you could do:
results <- data.frame(frequency = unclass(table(unlist(df))))
results$variable <- row.names(results)
row.names(results) <- NULL
results <- results[order(results$frequency),c(2,1)]
If you would like an additional summary and visualization of the frequencies, epiDisplay is another good option.
library(epiDisplay)
tab1(unlist(df), sort.group = "decreasing", cum.percent = TRUE)
Output
Frequency Percent Cum. percent
G1229 3 33.3 33.3
F447 3 33.3 66.7
A257 2 22.2 88.9
C245 1 11.1 100.0
Total 9 100.0 100.0
Data
df <- structure(list(Var1 = c("F447", "A257", "G1229"), Var2 = c("G1229",
"F447", "A257"), Var3 = c("C245", "F447", "G1229")), class = "data.frame", row.names = c(NA,
-3L))
I used the freq function from the frequency package to get frequency percentages on dataset$MoriskyAdherence, but R gives me rounded percent values. I need more decimal places.
MoriskyAdherence=dataset$MoriskyAdherence
freq(MoriskyAdherence)
The result is:
The Percent values are 35.0, 41.3, 23.8. The sum of them is 100.1.
The exact amounts should be 35.0, 41.25, 23.75.
What should I do?
I tried sprintf, as.data.frame, formatC, and some other functions to deal with it, but without success.
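One simple workaround is to compute the percentages directly in base R (a minimal sketch, assuming MoriskyAdherence is a factor or character vector):

# Compute the percentages yourself so you control the rounding
tab <- table(MoriskyAdherence)
round(100 * prop.table(tab), 2)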
The function freq returns a character data frame and has no option to adjust the number of decimal places. However, it is easy to recreate the table in whatever format you want. For example, I have written this function, which gives the same result but with two decimal places instead of one:
freq2 <- function(data_frame)
{
  df <- frequency::freq(data_frame)
  lapply(df, function(x)
  {
    # Recompute the percentages from the raw counts
    n <- suppressWarnings(as.numeric(x$Freq))
    sum_all <- as.numeric(x$Freq[nrow(x)])
    raw_percent <- suppressWarnings(100 * n / sum_all)
    # Index of the "Total" row of the valid (non-missing) block
    t_row <- grep("Total", x[, 2])[1]
    valid_percent <- suppressWarnings(100 * n / as.numeric(x$Freq[t_row]))
    # Reformat with two decimal places
    x$Percent <- format(round(raw_percent, 2), nsmall = 2)
    x$'Valid Percent' <- format(round(valid_percent, 2), nsmall = 2)
    x$'Cumulative Percent' <- format(round(cumsum(valid_percent), 2), nsmall = 2)
    # Blank out the cells that freq leaves empty
    x$'Cumulative Percent'[t_row:nrow(x)] <- ""
    x$'Valid Percent'[(t_row + 1):nrow(x)] <- ""
    return(x)
  })
}
Now instead of
freq(MoriskyAdherence)
#> Building tables
#> |===========================================================================| 100%
#> $`x:`
#> x label Freq Percent Valid Percent Cumulative Percent
#> 2 Valid High Adherence 56 35.0 35.0 35.0
#> 3 Low Adherence 66 41.3 41.3 76.3
#> 4 Medium Adherence 38 23.8 23.8 100.0
#> 41 Total 160 100.0 100.0
#> 1 Missing <blank> 0 0.0
#> 5 <NA> 0 0.0
#> 7 Total 160 100.0
you can do
freq2(MoriskyAdherence)
#> Building tables
#> |===========================================================================| 100%
#> $`x:`
#> x label Freq Percent Valid Percent Cumulative Percent
#> 2 Valid High Adherence 56 35.00 35.00 35.00
#> 3 Low Adherence 66 41.25 41.25 76.25
#> 4 Medium Adherence 38 23.75 23.75 100.00
#> 41 Total 160 100.00 100.00
#> 1 Missing <blank> 0 0.00
#> 5 <NA> 0 0.00
#> 7 Total 160 100.00
which is exactly what you were looking for.
Two (potential) solutions:
Solution #1:
Make changes inside the function freq. This can be done by retrieving the function's code with the command freq (typed without parentheses), or by retrieving the code, with comments, from https://rdrr.io/github/wilcoxa/frequencies/src/R/freq.R.
My hunch is that to obtain more decimals, changes must be implemented at this point in the code:
# create a list of frequencies
message("Building tables")
all_freqs <- lapply_pb(names(x), function(y, x1 = as.data.frame(x), maxrow1 = maxrow, trim1 = trim){
makefreqs(x1, y, maxrow1, trim1)
})
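If you experiment with such a change, one way to try it without rebuilding the package is to edit the internal function in the loaded namespace (a sketch; makefreqs is the internal helper called above, and fixInNamespace ships with base R's utils):

# Opens an editor on the package-internal function; the edit lasts
# only for the current R session
utils::fixInNamespace("makefreqs", ns = "frequency")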
Solution #2:
If you're only after percentages with more decimals, you can use aggregate. Let's suppose your data has this structure: a dataframe with two variables, one numeric, one a factor by which you want to group:
set.seed(123)
Var1 <- sample(LETTERS[1:4], 10, replace = T)
Var2 <- sample(10:100, 10, replace = T)
df <- data.frame(Var1, Var2)
Var1 Var2
1 B 97
2 D 51
3 B 71
4 D 62
5 D 19
6 A 91
7 C 32
8 D 13
9 C 39
10 B 96
Then, to obtain your percentages by factor, you would use aggregate thus:
aggregate(Var2 ~ Var1, data = df, function(x) sum(x)/sum(Var2)*100)
Var1 Var2
1 A 15.93695
2 B 46.23468
3 C 12.43433
4 D 25.39405
You can control the number of decimals by using round:
aggregate(Var2 ~ Var1, data = df, function(x) round(sum(x)/sum(Var2)*100,3))
I have a large data set of samples that belong to different groups and differ in the area covered. The structure of the data set is simplified below. I now would like to create pooled samples (Subgroups) for each Group where the area covered by each Subgroup equates to a specified area (e.g. 20). Samples should be allocated randomly and without replacement to each Subgroup and the number of the Subgroup should be listed in a new column at the end of the data frame.
SampleID Group Area Subgroup
1 A 1.5 1
2 A 3.8 2
3 A 6 4
4 A 1.9 1
5 A 1.5 3
6 A 4.1 1
7 A 3.7 1
8 A 4.5 3
...
300 B 1.2 1
301 B 3.8 1
302 B 4.1 4
303 B 2.6 3
304 B 3.1 5
305 B 3.5 3
306 B 2.1 2
...
2000 S 2.7 5
...
I am currently creating the Subgroups with cumsum, using the code below.
dat <- read.table("Pooling_Test.txt", header = TRUE, sep = "\t")
dat$CumArea <- cumsum(dat$Area)
dat$Diff_CumArea <- c(0, head(cumsum(dat$Area), -1))
dat$Sample_Int_1 <- "0"
dat$Sample_End <- "0"
dat$Subgroup <- NA  # initialise the column filled in below
current.sum <- 0
for (c in 1:nrow(dat)) {
  current.sum <- current.sum + dat[c, "Area"]
  dat[c, "Diff_CumArea"] <- current.sum
  if (current.sum >= 20) {
    dat[c, "Sample_Int_1"] <- "1"
    dat[c, "Sample_End"] <- "End"
    current.sum <- 0
    dat$Sample_Int_2 <- cumsum(as.numeric(dat$Sample_Int_1)) + 1
    dat$Sample_Final <- dat$Sample_Int_2
    for (d in 1:nrow(dat)) {
      if (dat$Sample_End[d] == 'End')
        dat$Subgroup[d] <- dat$Sample_Int_2[d] - 1
      else 0
    }
  }
}
write.csv(dat, file = 'Pooling_Test_Output.csv', row.names = FALSE)
The resultant data frame shows what I want (see below). However, there are a couple of steps I would like to improve. First, I have problems including a command for choosing samples randomly from each Group, so I currently randomise the order of the samples before loading the data frame into R. Secondly, in the output table the Subgroups are numbered consecutively, but I would like the Subgroup numbering to restart at 1 for each new Group. Does anybody have any advice on how to achieve this?
SampleID Group CumArea Subgroups
1 A 1.5 1
77 A 4.6 1
6 A 9.3 1
43 A 16.4 1
17 A 19.5 1
67 A 2.1 2
4 A 4.3 2
32 A 8.9 2
...
300 B 4.5 10
257 B 6.8 10
397 B 10.6 10
344 B 14.5 10
367 B 16.7 10
303 B 20.1 10
306 B 1.5 11
...
A few functions in the dplyr package make this fairly straightforward. You can use slice to randomize the data, group_by to perform computations at the group level, and mutate to create new variables. If you chain the functions together with the %>% operator, I believe the solution would look something like this, assuming that you want subgroups whose areas add up to 20. Because the data are grouped, the random slice happens within each Group and the SubGroup numbering restarts at 1 for every Group, which covers both of your requests.
install.packages("dplyr") #If you haven't used dplyr before
library(dplyr)
dat %>%
group_by(Group) %>%
slice(sample(1:n())) %>%
mutate(CumArea = cumsum(Area), SubGroup = ceiling(CumArea / 20))
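One small usage note (my addition, not part of the original answer): because slice(sample(1:n())) draws randomly, set a seed first if you need the allocation to be reproducible.

set.seed(123)  # any fixed seed makes the random slice() repeatable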
This question already has answers here:
Aggregate a dataframe on a given column and display another column
(8 answers)
Closed 5 years ago.
I've got data somewhat like this (of course with many more rows):
Age Work_Zone SomeNumber
26 1 2.61
32 4 8.42
41 2 9.71
45 2 4.14
64 3 6.04
56 1 5.28
37 4 7.93
I want to get the maximum SomeNumber for each zone at or below each age. SomeNumber increases with age, so I expect the highest SomeNumber in Zone 2 among under-32s to come from someone aged 31, but it could in fact come from someone aged 27.
To do this I've written a nested for loop:
zonelist <- unique(data$Work_Zone)  # zones to loop over
for(i in zonelist){
  temp = data[data$Work_Zone==i,]
  temp.lessequal = c()  # running maxima, one entry per age cutoff
  for(j in 1:max(data$Age)){
    temp.lessequal=c(temp.lessequal,max((temp[temp$Age<=j,])$SomeNumber))
  }
  #plot temp.lessequal or save it at this point
}
which of course is tremendously slow. How can I do this faster? I've looked at the order function to sort by two columns at once, but that doesn't let me take the max of each group.
Data:
df1 <- read.table(text='Age Work_Zone SomeNumber
26 1 2.61
32 4 8.42
41 2 9.71
45 2 4.14
64 3 6.04
56 1 5.28
37 4 7.93',
header = TRUE)
Code:
df2 <- with( df1, df1[ Age <= 32, ] ) # extract rows with Age <= 32
# get maximum of someNumber by aggregating with work_zone and then merging with df1 to combine the age column
merge(aggregate(SomeNumber ~ Work_Zone, data = df2, max), df2)
# Work_Zone SomeNumber Age
# 1 1 2.61 26
# 2 4 8.42 32
It seems the OP is looking for the max value based on a <= condition on a particular column (Age).
sqldf comes in very handy in such cases, since the SQL spells out the logic. One solution could be:
# Data
df <- read.table(text = "Age Work_Zone SomeNumber
26 1 2.61
32 4 8.42
41 2 9.71
45 2 4.14
64 3 6.04
56 1 5.28
37 4 7.93", header = T, stringsAsFactors = F)
library(sqldf)
df3 <- sqldf("select df1.Work_Zone, df1.Age, max(df2.SomeNumber) from df df1
inner join df df2 on df1.Work_Zone = df2.Work_Zone
WHERE df2.Age <= df1.Age
GROUP BY df1.Work_Zone, df1.Age")
# Result:
# Work_Zone Age max(df2.SomeNumber)
# 1 1 26 2.61
# 2 1 56 5.28
# 3 2 41 9.71
# 4 2 45 9.71
# 5 3 64 6.04
# 6 4 32 8.42
# 7 4 37 8.42
Using the data.table package, you can select the rows at or below the required age, then output max(SomeNumber) and the corresponding Age for each Work_Zone, i.e. grouping by Work_Zone.
library(data.table)
setDT(df1)[Age<=32,.(max(SomeNumber),Age),by=Work_Zone]
Work_Zone V1 Age
1: 1 2.61 26
2: 4 8.42 32
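If you want the running maximum at or below every observed age, rather than a single cutoff like 32 (my extension of this answer, not part of the original), a cumulative maximum within each zone does it:

library(data.table)
# Order by Age within each Work_Zone and take the running maximum,
# giving the best SomeNumber at or below each row's age
setDT(df1)[order(Age), RunMax := cummax(SomeNumber), by = Work_Zone][]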
I am new-ish to R and have what should be a simple enough question to answer; any help would be greatly appreciated.
The situation is that I have a tab-delimited data matrix (data matrix.txt) like the one below, with group information included in the last column.
sampleA sampleB sampleC Group
obs11 23.2 52.5 -86.3 1
obs12 -86.3 32.5 -84.7 1
obs41 -76.2 35.8 -16.3 2
obs74 23.2 32.5 -86.8 2
obs82 -86.2 52.8 -83.2 3
obs38 -36.2 59.5 -74.3 3
I would like to replace the values in each group with the average value for that group.
How can a group average, rather than a row or column average, be calculated in R?
And how can I use this value to replace the original values? Is the replace() function usable in this situation, or is that only for replacing two known values?
Thanks in advance
The ddply function from the plyr package should do the trick.
dat <- as.data.frame(matrix(runif(80),ncol=4))
dat$group <- sample(letters[1:4],size=20,replace=T)
head(dat)
library(plyr)
ddply(.data = dat, .variables =.(group), colwise(mean))
Result
group V1 V2 V3 V4
1 a 0.4741673 0.7669612 0.5043857 0.5039938
2 b 0.3648794 0.5776748 0.4033758 0.5748613
3 c 0.1450466 0.5399372 0.2440170 0.5124578
4 d 0.4249183 0.3252093 0.5467726 0.4416924
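That gives the group means themselves. For the second part of the question, replacing each original value with its group mean, a base-R sketch using ave() would work (my addition, reusing the simulated dat from above):

# ave() returns the group mean repeated for every row, so assigning
# it back over the numeric columns does the replacement in place
dat_means <- dat
dat_means[1:4] <- lapply(dat[1:4], ave, dat$group)
head(dat_means)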