Calculate rowwise maximum from columns that have changing names - r

I have the following objects:
s1 = "1_1_1_1_1"
s2 = "2_1_1_1_1"
s3 = "3_1_1_1_1"
Please note that the value of s1, s2, s3 can change in another example.
I then have the following data frame:
set.seed(666)
df = data.frame(draw = c(1,2,3,4,1,2,3,4,1,2,3,4),
                resp = c(1,1,1,1,2,2,2,2,3,3,3,3),
                "1_1_1_1_1" = runif(12),
                "2_1_1_1_1" = runif(12),
                "3_1_1_1_1" = runif(12))
Please note that the column names of my data frame will change based on the values of s1, s2, s3.
I now want to achieve the following:
I want to find out which of the last three columns in df has the highest value and store that as a value in a new column (the value should be 1, 2 or 3, depending on whether the highest value is in the first, second or third of these columns).
Now that I know which value is the highest per row, I want to group/summarize the result by the column resp and count how often my max value is 1, 2 or 3.
So the outcome from 1. should be:
draw resp 1_1_1_1_1 2_1_1_1_1 3_1_1_1_1 max
1 1 0.774 0.095 0.806 3
2 1 0.197 0.142 0.266 3
...
And the outcome from 2. is supposed to be:
resp first_max second_max third_max
1 1 1 2
2 2 1 1
3 1 2 1
My problem is that tidyverse's rowwise function is deprecated and that I don't know how I can dynamically address columns in a tidyverse pipe by column names which are stored externally (here in s1, s2, s3). One last note: I might be overcomplicating things by trying to go by the column names, when, in fact, the columns that I'm interested in are always at positions 3:5.

Here is one way to get what you want. For a slightly different format, you can use count rather than table, but this matches your expected output. Hope this helps!!
library(dplyr)
df %>%
  mutate(max_val = max.col(select(., starts_with("X")))) %>%
  select(resp, max_val) %>%
  table()
max_val
resp 1 2 3
1 1 1 2
2 2 1 1
3 1 2 1
Or, you could do this:
library(tidyr)  # for spread()

df %>%
  mutate(max_val = max.col(.[3:5])) %>%
  count(resp, max_val) %>%
  mutate(max_val = paste0("max_", max_val)) %>%
  spread(value = n, key = max_val)
resp max_1 max_2 max_3
<dbl> <int> <int> <int>
1 1 1 1 2
2 2 2 1 1
3 3 1 2 1
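If you want to address the columns by the names stored in s1, s2 and s3 rather than by position or by the "X" prefix, a minimal sketch (assuming those objects hold the bare column names as in the question) could be:
library(dplyr)
library(tidyr)

# make.names() mimics what data.frame() did to the names ("1_1_1_1_1" -> "X1_1_1_1_1");
# drop it if the data frame was built with check.names = FALSE.
cols <- make.names(c(s1, s2, s3))

df %>%
  mutate(max_val = max.col(select(., all_of(cols)))) %>%
  count(resp, max_val) %>%
  mutate(max_val = paste0("max_", max_val)) %>%
  spread(value = n, key = max_val)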

Calculate the max using pmap() (row-wise iteration):
library(dplyr)   # %>%
library(purrr)   # pmap_dbl()
library(tibble)  # add_column()

max_cols <- pmap_dbl(unname(df), function(x, y, ...) {
  # x = draw, y = resp; the remaining arguments are the three value columns
  vals <- unlist(list(...))
  which(vals == max(vals))
})
result <- df %>% add_column(max = max_cols)
> result
draw resp X1_1_1_1_1 X2_1_1_1_1 X3_1_1_1_1 max
1 1 1 0.4551478 0.70061232 0.618439890 2
2 2 1 0.3667764 0.26670969 0.024742605 1
3 3 1 0.6806912 0.03233215 0.004014758 1
4 4 1 0.9117449 0.42926492 0.885247456 1
5 1 2 0.1886954 0.34189707 0.985054492 3
6 2 2 0.5569398 0.78043504 0.100714130 2
7 3 2 0.9791164 0.92823982 0.676584495 1
8 4 2 0.9174654 0.74627116 0.485582287 1
9 1 3 0.3681890 0.69622331 0.672346875 2
10 2 3 0.5510356 0.99651637 0.482430518 2
11 3 3 0.4283281 0.12832611 0.018095649 1
12 4 3 0.6168436 0.64381995 0.655178701 3
Reshape the data frame.
reshape2::dcast(result,resp~max,fun.aggregate = length,value.var = "max")
resp 1 2 3
1 1 1 1 2
2 2 2 1 1
3 3 1 2 1

Related

Finding the maximum value of a variable

I would like to find the maximum value of a variable (column) and then retain this value (the maximum value) and all values below it. Along with these values, I would like to retain the corresponding values from all other variables (columns) within the data frame. I want to exclude all values above this point from the data frame, for all variables within it. Included is the script for an example data frame (df), and an expected data frame (df2) i.e. what I am trying to achieve. I would be so grateful for some script to do this.
Ba <- c(1,1,1,2,2)
Sr <- c(1,1,1,2,2)
Mn <- c(1,1,2,1,1)
df <- data.frame(Ba, Sr, Mn)
df
# Ba Sr Mn
# 1 1 1 1
# 2 1 1 1
# 3 1 1 2
# 4 2 2 1
# 5 2 2 1
This is what I want to achieve in R:
Ba2 <- c(1,2,2)
Sr2 <- c(1,2,2)
Mn2 <- c(2,1,1)
df2 <- data.frame(Ba2, Sr2, Mn2)
df2
# Ba2 Sr2 Mn2
# 1 1 1 2
# 2 2 2 1
# 3 2 2 1
You can subset df with the row sequence running from the smallest which.max across the columns to nrow(df):
df[min(sapply(df, which.max)):nrow(df),]
# Ba Sr Mn
#3 1 1 2
#4 2 2 1
#5 2 2 1
Does this work:
df[max(apply(df, 1, which.max)):nrow(df),]
Ba Sr Mn
3 1 1 2
4 2 2 1
5 2 2 1
Using cummax
library(dplyr)
library(purrr)
df %>%
  filter(cummax(invoke(pmax, cur_data())) == max(cur_data()))
Ba Sr Mn
1 1 1 2
2 2 2 1
3 2 2 1
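cur_data() and invoke() have since been superseded (by pick() in dplyr >= 1.1.0, and by exec()/do.call() respectively); a hedged equivalent of the same idea might look like this:
library(dplyr)  # pick() requires dplyr >= 1.1.0

df %>%
  # row-wise max via pmax over all columns, cumulative max down the rows,
  # keep rows from the point where the overall maximum has been reached
  filter(cummax(do.call(pmax, pick(everything()))) == max(pick(everything())))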

Sort across rows to obtain three largest values

There is an injury score called the ISS score.
I have a table of injury data in rows according to pt ID.
I would like to obtain the top three values for the 6 injury columns.
Column values range from 0-5.
pt_id head face abdo pelvis Extremity External
1 4 0 0 1 0 3
2 3 3 5 0 3 2
3 0 0 2 1 1 1
4 2 0 0 0 0 1
5 5 0 0 2 0 1
My output for the above example would be
pt_id n1 n2 n3
1 4 3 1
2 5 3 3
3 2 1 1
4 2 1 0
5 5 2 1
Values can be in a list or in new columns, as calculating the score is simple from that point on.
I had thought that I would be able to create a list for the 6 injury columns and then apply a sort to each list taking the top three values. My code for that was:
ais$ais_list <- setNames(split(ais[,2:7], seq(nrow(ais))), rownames(ais))
But I struggled to apply the sort to the lists within the data frame, as unfortunately some of the data in my data set includes NA values.
We could use apply row-wise to sort the data frame and take only the first three values in each row.
cbind(df[1], t(apply(df[-1], 1, sort, decreasing = TRUE)[1:3, ]))
# pt_id 1 2 3
#1 1 4 3 1
#2 2 5 3 3
#3 3 2 1 1
#4 4 2 1 0
#5 5 5 2 1
As some values may contain NA, it is better to apply sort inside an anonymous function and then take the top 3 values using head.
cbind(df[1], t(apply(df[-1], 1, function(x) head(sort(x, decreasing = TRUE), 3))))
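For illustration, sort() drops NA values by default (na.last = NA), so the anonymous function returns the three largest non-missing scores; a small hypothetical example:
x <- c(3, NA, 5, 0, 2, 1)
head(sort(x, decreasing = TRUE), 3)
# [1] 5 3 2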
A tidyverse option is to first gather the data, arrange it in descending order and, for every pt_id, keep only the first three values. We then replace the injury column with the ranks 1 to 3 and finally spread the data back to wide format.
library(tidyverse)
df %>%
  gather(injury, value, -pt_id) %>%
  arrange(desc(value)) %>%
  group_by(pt_id) %>%
  slice(1:3) %>%
  mutate(injury = 1:3) %>%
  spread(injury, value)
# pt_id `1` `2` `3`
# <int> <int> <int> <int>
#1 1 4 3 1
#2 2 5 3 3
#3 3 2 1 1
#4 4 2 1 0
#5 5 5 2 1

how to subset a data frame up until a point R

I want to subset a data frame and take all observations for each id until the first observation that doesn't meet my condition. Something like this:
goodDaysAfterTreatMent <- subset(Patientdays, treatmentDate < date & goodThings > badThings)
Except that this returns all observations that meet the condition. I want something that stops at the first observation that doesn't meet the condition, moves on to the next id, returns all observations for that id that meet the condition, and so on.
The only way I can see is to use a lot of loops, but that's usually not a good thing.
Hope you guys have an idea.
Assume that your condition is to return rows where v < 5:
# example dataset
df = data.frame(id = c(1,1,1,1,2,2,2,2,3,3,3),
                v = c(2,4,3,5,4,5,6,7,5,4,1))
df
# id v
# 1 1 2
# 2 1 4
# 3 1 3
# 4 1 5
# 5 2 4
# 6 2 5
# 7 2 6
# 8 2 7
# 9 3 5
# 10 3 4
# 11 3 1
library(tidyverse)
df %>%
  group_by(id) %>%                                  # for each id
  mutate(flag = cumsum(ifelse(v < 5, 1, NA))) %>%   # flag turns NA at the first row with v >= 5 and stays NA for all rows after it
  filter(!is.na(flag)) %>%                          # keep only rows with a non-NA flag
  ungroup() %>%                                     # drop the grouping
  select(-flag)                                     # remove the helper column
# # A tibble: 4 x 2
# id v
# <dbl> <dbl>
# 1 1 2
# 2 1 4
# 3 1 3
# 4 2 4
Easy way:
Find the first FALSE with min(which(condition == FALSE)):
Patientdays<-cbind.data.frame(treatmentDate=c(1:5,4,6:10),date=c(2:5,3,6:10,10),goodThings=c(1:11),badThings=c(0:10))
attach(Patientdays)# Just due to ease of use (optional)
condition<-treatmentDate < date & goodThings > badThings
Patientdays[1:(min(which(condition == F))-1),]
Edit: Adding result.
treatmentDate date goodThings badThings
1 1 2 1 0
2 2 3 2 1
3 3 4 3 2
4 4 5 4 3
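If the data contain several ids, the same idea can be applied per id; here is a minimal sketch using by() and the toy condition v < 5 from the dplyr answer above (not the asker's actual columns):
# Keep, for each id, only the rows before the first one that fails the condition
do.call(rbind, by(df, df$id, function(d) {
  ok <- d$v < 5                                   # hypothetical condition
  d[seq_len(min(which(!ok), nrow(d) + 1) - 1), ]  # rows before the first FALSE
}))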

create variable conditionally by group in R (write function)

I want to create a variable by group, conditioned on an existing variable at the individual level. Each individual has an outlier variable of 1, 2 or 3. I want to create a new variable by group so that the new var = 2 whenever there is at least one individual in that group whose outlier variable = 2, and the new var = 3 whenever there is at least one individual in that group whose outlier variable = 3.
The data looks like this
grpid id outlier
1 1 1
1 2 1
1 3 2
2 4 1
2 5 3
2 6 1
3 7 1
3 8 1
3 9 1
Ideal output like this
grpid id outlier goutlier
1 1 1 2
1 2 1 2
1 3 2 2
2 4 1 3
2 5 3 3
2 6 1 3
3 7 1 1
3 8 1 1
3 9 1 1
Any suggestions?
Thanks!
It is easy with dplyr
library(dplyr)
df <- read.table(header = TRUE,sep = ",",
text = "grpid,id,outlier
1,1,1
1,2,1
1,3,2
2,4,1
2,5,3
2,6,1
3,7,1
3,8,1
3,9,1")
df %>% group_by(grpid) %>% mutate(goutlier = max(outlier))
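A base R equivalent, in case you prefer not to load dplyr, is a one-liner with ave() (a sketch on the same df):
# Group-wise maximum of outlier, recycled to every row of the group
df$goutlier <- ave(df$outlier, df$grpid, FUN = max)
df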

Summary of values across rows and columns in R

I have a dataset that looks like:
Group A B C D
XYZ 4 Na 1 3
XYZ Na 2 2 1
DEF 4 3 2 1
DEF 3 3 1 1
PQR 1 Na Na 1
PQR 3 2 2 4
I want the summary of this dataset across rows and columns for the count of each value as below:
Group 4 3 2 1
XYZ 1 1 2 2
DEF 1 3 1 3
PQR 1 1 2 2
The count of 4 in the dataset for group XYZ across all rows and columns is 1, for 2 and 1 it is 2, and for 3 it is 1. I can do this by creating 4 new columns (4, 3, 2, 1) and getting the count row-wise and then column-wise, but this is not efficient or scalable. I am sure there is a better way to get this done.
Using the reshape2 package, we can melt and dcast as follows:
library(reshape2)
dcast(na.omit(melt(df, id.vars = 'Group')), Group ~ value, fun.aggregate = length)
# Group 1 2 3 4
#1 DEF 3 1 3 1
#2 PQR 2 2 1 1
#3 XYZ 2 2 1 1
This uses no packages and is just one line. Here DF$Group[row(DF[-1])] is a Group labels vector such that each element corresponds to the unravelled numeric vector unlist(DF[-1]).
table(DF$Group[row(DF[-1])], unlist(DF[-1]))
giving:
1 2 3 4
DEF 3 1 3 1
PQR 2 2 1 1
XYZ 2 2 1 1
If the order of rows and columns shown in the question is important, then we can create factors from each of the two table arguments, with the factor levels defined in the desired orders. In this case we use the following line instead of the line of code above:
table(Group = factor(DF$Group[row(DF[-1])], unique(DF$Group)), factor(unlist(DF[-1]), 4:1))
giving:
Group 4 3 2 1
XYZ 1 1 2 2
DEF 1 3 1 3
PQR 1 1 2 2
The above produces an object of class "table". This is a particularly suitable class for tabulated frequencies. For example, once in this form ftable can be used to easily rearrange it further as in ftable(tab, row.vars = 2) or ftable(tab, row.vars = 1:2) where tab is the above computed table.
If a data.frame were preferred then convert it like this:
cbind(Group = rownames(tab), as.data.frame.matrix(tab))
The input data.frame DF is defined reproducibly in Note 2 at the end.
Alternatives
Although the above seems the most direct here are some other alternatives that also use no packages:
1) by For each set of rows having the same Group value the anonymous function creates a data.frame identifying the Group, converting the columns other than the first to a factor with the indicated levels and running table to get the counts. The "by" list that is returned is sorted back to the original order and we rbind everything back together.
do.call("rbind",
by(DF, DF$Group, function(x) {
data.frame(Group = x[1,1],
as.list(table(factor(unlist(x[, -1]), levels = 4:1))),
check.names = FALSE)
})[unique(DF$Group)])
giving:
Group 4 3 2 1
XYZ XYZ 1 1 2 2
DEF DEF 1 3 1 3
PQR PQR 1 1 2 2
1a) This slightly shorter variation would also work. It returns a matrix identifying the groups using row names.
kount <- function(x) table(factor(unlist(x), levels = 4:1))
m <- do.call("rbind", by(DF[, -1], DF$Group, kount)[unique(DF$Group)])
giving:
> m
4 3 2 1
XYZ 1 1 2 2
DEF 1 3 1 3
PQR 1 1 2 2
2) outer
gps <- unique(DF$Group)
levs <- 4:1
kount2 <- function(g, lv) sum(subset(DF, Group == g)[-1] == lv, na.rm = TRUE)
m <- outer(gps, levs, Vectorize(kount2))
dimnames(m) <- list(gps, levs)
giving this matrix:
> m
4 3 2 1
XYZ 1 1 2 2
DEF 1 3 1 3
PQR 1 1 2 2
3) sapply
kount3 <- function(g) table(factor(unlist(DF[DF$Group == g, -1]), levels = 4:1))
gps <- as.character(unique(DF$Group))
do.call("rbind", sapply(gps, kount3, simplify = FALSE))
giving:
4 3 2 1
XYZ 1 1 2 2
DEF 1 3 1 3
PQR 1 1 2 2
4) aggregate
aggregate(1:nrow(DF), DF["Group"], function(ix)
table(factor(unlist(DF[ix, -1]), levels = 4:1)))[unique(DF$Group), ]
giving:
Group x.4 x.3 x.2 x.1
3 XYZ 1 1 2 2
1 DEF 1 3 1 3
2 PQR 1 1 2 2
5) tapply
do.call("rbind", tapply(1:nrow(DF), DF$Group, function(ix)
table(factor(unlist(DF[ix, -1]), levels = 4:1))))[unique(DF$Group), ]
6) reshape
with(reshape(DF, dir = "long", varying = list(2:5)),
table(factor(Group, unique(DF$Group)), factor(A, 4:1)))
giving:
4 3 2 1
XYZ 1 1 2 2
DEF 1 3 1 3
PQR 1 1 2 2
Note 1: (1a), (2), (3), (5) and (6) produce a matrix or table result with groups as row names. If you prefer a data frame with Groups as a column then supposing that m is the matrix, add this:
data.frame(Group = rownames(m), m, check.names = FALSE)
Note 2: The input DF in reproducible form is:
Lines <- "Group A B C D
XYZ 4 Na 1 3
XYZ Na 2 2 1
DEF 4 3 2 1
DEF 3 3 1 1
PQR 1 Na Na 1
PQR 3 2 2 4"
DF <- read.table(text = Lines, header = TRUE, na.strings = "Na")
We can use dplyr/tidyr
library(dplyr)
library(tidyr)
df1 %>%
  mutate_each(funs(replace(., . == "Na", NA))) %>%
  gather(Var, Val, A:D, na.rm = TRUE) %>%
  group_by(Group, Val) %>%
  tally() %>%
  spread(Val, n)
# Group `1` `2` `3` `4`
#* <chr> <int> <int> <int> <int>
#1 DEF 3 1 3 1
#2 PQR 2 2 1 1
#3 XYZ 2 2 1 1
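mutate_each(), gather() and spread() are superseded in current dplyr/tidyr; a hedged modern equivalent (assuming df1 holds the literal string "Na" for missing values, as above) could be:
library(dplyr)
library(tidyr)

df1 %>%
  mutate(across(A:D, ~ na_if(as.character(.x), "Na"))) %>%  # turn "Na" strings into real NA
  pivot_longer(A:D, names_to = "Var", values_to = "Val", values_drop_na = TRUE) %>%
  count(Group, Val) %>%
  pivot_wider(names_from = Val, values_from = n)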
