I have two tables, as follows:
a<-data.frame("Task"=c("A","B","C","D","E"),"FC"=c(100,NA,300,500,400),"FH"=c(NA,100,200,NA,300))
Task FC FH
1 A 100 NA
2 B NA 100
3 C 300 200
4 D 500 NA
5 E 400 300
b<-data.frame("Task"=c("A","B","C"),FC=c(10,20,30),FH=c(20,10,30))
Task FC FH
1 A 10 20
2 B 20 10
3 C 30 30
I want the output to contain the sum of the corresponding values from table a and table b (only for the tasks present in both tables), but if either value is NA the result should be NA.
The output is like this:
Task FC FH
1 A 110 NA
2 B NA 110
3 C 330 230
With base R, you can keep the rows of a whose Task also appears in b, stack them onto b with rbind, and aggregate by Task; na.action = "na.pass" keeps the NAs so that sum() returns NA for any group that contains one:
data <- rbind(a[a$Task %in% b$Task, ], b)
aggregate(. ~ Task, sum, na.action = "na.pass", data = data)
Task FC FH
1 A 110 NA
2 B NA 110
3 C 330 230
Or the same with dplyr:
library(dplyr)
bind_rows(a[a$Task %in% b$Task, ], b) %>%
group_by(Task) %>%
summarise_all(sum)
Task FC FH
<chr> <dbl> <dbl>
1 A 110 NA
2 B NA 110
3 C 330 230
Or, to stay entirely within dplyr, use the .id column from bind_rows to keep only the tasks that appear in both data frames:
bind_rows(a, b, .id = "ID") %>%
group_by(Task) %>%
filter(n_distinct(ID) != 1) %>%
select(-ID) %>%
summarise_all(sum)
I have a dataset similar to this one (the real one is clearly much bigger):
ID <- c(1,2,3,4,5,6)
MASS <- c(324,162,508,675,670,832)
DIFF <- c("2","1","5","0","3&6","5")
d <- data.frame(ID, MASS, DIFF)
ID MASS DIFF
1 1 324 2
2 2 162 1
3 3 508 5
4 4 675 0
5 5 670 3&6
6 6 832 5
Is there any way in R to set up a script that would:
read the values reported in the column DIFF (not considering & or 0)
find the same values in the column ID
paste the corresponding values from the column MASS into a new column (one value per cell) next to the column DIFF; if more than one value is reported in DIFF, make new columns (MASS1, MASS2, MASS3, ...)
The aim would be to obtain something like what is reported below; I hope this clarifies my clumsy description of the problem:
ID MASS DIFF MASS1 MASS2
1 1 324 2 162 NA
2 2 162 1 324 NA
3 3 508 5 670 NA
4 4 675 0 NA NA
5 5 670 3&6 508 832
6 6 832 5 670 NA
Many thanks for any advice
This feels pretty hacky and overly complicated, but it works. Maybe someone else has a more efficient method:
library(dplyr)
library(tidyr)
library(purrr)
d |>
separate_rows(DIFF, convert = TRUE) |>
left_join(d, c("DIFF" = "ID")) |>
select(-DIFF.y) |>
group_by(ID) |>
mutate(DIFF = paste(DIFF, collapse = "&")) |>
ungroup() |>
rename(MASS = MASS.x) |>
group_split(ID) |>
map(~ .x |>
mutate(temp = row_number()) |>
pivot_wider(values_from = MASS.y, names_from = temp, names_glue = "MASS{temp}")) |>
bind_rows()
# A tibble: 6 × 5
ID MASS DIFF MASS1 MASS2
<dbl> <dbl> <chr> <dbl> <dbl>
1 1 324 2 162 NA
2 2 162 1 324 NA
3 3 508 5 670 NA
4 4 675 0 NA NA
5 5 670 3&6 508 832
6 6 832 5 670 NA
I looked here and elsewhere, but I cannot find something that does exactly what I'm looking to accomplish using R.
I have data similar to the example below, where col1 is a unique ID, col2 is a group ID variable, and col3 is a status code. I need to flag rows by group: if any row in a group has a specific status code (X in this case), every row of that group should get Flag == 1, otherwise 0.
ID GroupID Status Flag
1 100 A 1
2 100 X 1
3 102 A 0
4 102 B 0
5 103 B 1
6 103 X 1
7 104 X 1
8 104 X 1
9 105 A 0
10 105 C 0
I have tried writing an ifelse along the lines of "where groupID == groupID and Status == X then 1 else 0", but that doesn't work. The pattern of Status is random. In this example the GroupIDs happen to come in pairs, but I don't want to assume that in the code, because I have other instances where there are 3 or more rows per GroupID.
It would also be helpful if this were open-ended, i.e. I could add other conditions if necessary (for each matching group ID, where Status == X and/or some other condition, etc.).
Thank you !
Group-based operations like this are easy to do with the dplyr package.
The data:
library(dplyr)
txt <- 'ID GroupID Status
1 100 A
2 100 X
3 102 A
4 102 B
5 103 B
6 103 X
7 104 X
8 104 X
9 105 A
10 105 C '
df <- read.table(text = txt, header = T)
Once we have the data frame, we establish dplyr groups with the group_by function. The mutate command is then applied within each group, creating a new column entry for each row.
df.new <- df %>%
group_by(GroupID) %>%
mutate(Flag = as.numeric(any(Status == 'X')))
# A tibble: 10 x 4
# Groups: GroupID [5]
ID GroupID Status Flag
<int> <int> <fct> <dbl>
1 1 100 A 1
2 2 100 X 1
3 3 102 A 0
4 4 102 B 0
5 5 103 B 1
6 6 103 X 1
7 7 104 X 1
8 8 104 X 1
9 9 105 A 0
10 10 105 C 0
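Since the flag is just any() applied to a logical condition within each group, extra conditions can be added inside any(). For instance (the second condition here is purely illustrative, not from the question):
df %>%
group_by(GroupID) %>%
mutate(Flag = as.numeric(any(Status == 'X' | Status == 'B')))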
From base R:
ave(df$Status == 'X', df$GroupID, FUN = any)
[1] TRUE TRUE FALSE FALSE TRUE TRUE TRUE TRUE FALSE FALSE
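This returns a logical vector; a small sketch of attaching it to df as a 0/1 column:
df$Flag <- as.integer(ave(df$Status == 'X', df$GroupID, FUN = any))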
Data.table way:
library(data.table)
setDT(df)
df[ , flag := sum(Status == "X") > 0, by=GroupID]
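If a 0/1 column is preferred over TRUE/FALSE, the same idea works with as.integer:
df[, flag := as.integer(any(Status == "X")), by = GroupID]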
An alternative using data.table
library(data.table)
dt <- read.table(stringsAsFactors = FALSE,text = "ID GroupID Status
1 100 A
2 100 X
3 102 A
4 102 B
5 103 B
6 103 X
7 104 X
8 104 X
9 105 A
10 105 C", header=T)
setDT(dt)[, .(ID, Status, Flag = ifelse("X" %in% Status, 1, 0)), by = GroupID]
# returns
GroupID ID Status Flag
1: 100 1 A 1
2: 100 2 X 1
3: 102 3 A 0
4: 102 4 B 0
5: 103 5 B 1
6: 103 6 X 1
7: 104 7 X 1
8: 104 8 X 1
9: 105 9 A 0
10: 105 10 C 0
A base R option with rowsum (the unary + converts the logical result to 0/1):
i1 <- with(df1, rowsum(+(Status == "X"), group = GroupID) > 0)
transform(df1, Flag = +(GroupID %in% row.names(i1)[i1]))
Or using table (the second column of table(GroupID, Status == "X") counts the TRUE cases per group):
df1$Flag <- +(with(df1, GroupID %in% names(which(table(GroupID,
Status == "X")[,2]> 0))))
I want to build all possible pairs of rows in a data frame within each level of a categorical variable name, and then take the differences between these rows, within each level of name, for all non-factor variables: row 1 - row 2, row 1 - row 3, …
set.seed(9)
df <- data.frame(
ID = 1:10,
name = as.factor(rep(LETTERS, each = 4)[1:10]),
X1 = sample(1001, 10),
X2 = sample(1001, 10),
bool = sample(c(TRUE, FALSE), 10, replace = TRUE),
fruit = as.factor(sample(c("Apple", "Orange", "Kiwi"), 10, replace = TRUE))
)
This is what the sample looks like:
ID name X1 X2 bool fruit
1 1 A 222 118 FALSE Apple
2 2 A 25 9 TRUE Kiwi
3 3 A 207 883 TRUE Orange
4 4 A 216 301 TRUE Kiwi
5 5 B 443 492 FALSE Apple
6 6 B 134 499 FALSE Kiwi
7 7 B 389 401 TRUE Kiwi
8 8 B 368 972 TRUE Kiwi
9 9 C 665 356 FALSE Apple
10 10 C 985 488 FALSE Kiwi
I want to get a dataframe of 13 rows which looks like :
ID name X1 X2 bool fruit
1 1-2 A 197 109 -1 Apple
2 1-3 A 15 -765 -1 Kiwi
…
Note that the factor fruit should be unchanged. That is a bonus, though; above all I want X1 and X2 to be differenced and the factor name to be kept.
I know I could use the combn function, but I do not see how to apply it here. I would prefer a solution with the dplyr package and the group_by function.
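For reference, combn(x, 2, FUN) applies FUN to every unordered pair of elements of x, e.g.:
combn(c(10, 4, 1), 2, FUN = diff)
# [1] -6 -9 -3   (the pairs 10 & 4, 10 & 1, 4 & 1)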
I've managed to create all differences between consecutive rows with dplyr using:
varnotfac <- names(df)[!sapply(df, is.factor)]  # keep non-factor variables (logicals included)
library(dplyr)
diff <- df%>%
group_by(name) %>%
mutate_at(varnotfac, funs(. - lead(.))) %>%
na.omit()
I could not work out how to keep all variables using filter_if / filter_at, so I used select_at. Building on @Axeman's answer:
set.seed(9)
varnotfac <- names(df)[!sapply(df, is.factor )] # names of non-factorial variables
diff1<- df %>%
group_by(name) %>%
select_at(vars(varnotfac)) %>%
nest() %>%
mutate(data = purrr::map(data, ~as.data.frame(map(.x, ~combn(., 2, base::diff))))) %>%
unnest()
Or with the outer function, which is much faster than combn:
set.seed(9)
varnotfac <- names(df)[!sapply(df, is.factor )] # names of non-factorial variables
allpairs <- function(v){
y <- outer(v,v,'-')
z <- y[lower.tri(y)]
return(z)
}
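# e.g. allpairs(c(10, 4, 1)) gives -6 -9 -3, the same pairs and order as combn(c(10, 4, 1), 2, diff)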
diff2<- df %>%
group_by(name) %>%
select_at(vars(varnotfac)) %>%
nest() %>%
mutate(data = purrr::map(data, ~as.data.frame(map(.x, ~allpairs(.))))) %>%
unnest()
One can check that the data frames obtained are the same with:
all.equal(diff1,diff2)
[1] TRUE
My sample looks different...
ID name X1 X2 bool
1 1 A 222 118 FALSE
2 2 A 25 9 TRUE
3 3 A 207 883 TRUE
4 4 A 216 301 TRUE
5 5 B 443 492 FALSE
6 6 B 134 499 FALSE
7 7 B 389 401 TRUE
8 8 B 368 972 TRUE
9 9 C 665 356 FALSE
10 10 C 985 488 FALSE
Using this, and looking here, we can do:
library(dplyr)
library(tidyr)
library(purrr)
df %>%
group_by(name) %>%
nest() %>%
mutate(data = map(data, ~as.data.frame(map(.x, ~as.numeric(dist(.)))))) %>%
unnest()
# A tibble: 13 x 5
name ID X1 X2 bool
<fct> <dbl> <dbl> <dbl> <dbl>
1 A 1 197 109 1
2 A 2 15 765 1
3 A 3 6 183 1
4 A 1 182 874 0
5 A 2 191 292 0
6 A 1 9 582 0
7 B 1 309 7 0
8 B 2 54 91 1
9 B 3 75 480 1
10 B 1 255 98 1
11 B 2 234 473 1
12 B 1 21 571 0
13 C 1 320 132 0
These differences are unsigned (dist returns absolute differences), though. Alternatively:
df %>%
group_by(name) %>%
nest() %>%
mutate(data = map(data, ~as.data.frame(map(.x, ~combn(., 2, diff))))) %>%
unnest()
# A tibble: 13 x 5
name ID X1 X2 bool
<fct> <int> <int> <int> <int>
1 A 1 -197 -109 1
2 A 2 -15 765 1
3 A 3 -6 183 1
4 A 1 182 874 0
5 A 2 191 292 0
6 A 1 9 -582 0
7 B 1 -309 7 0
8 B 2 -54 -91 1
9 B 3 -75 480 1
10 B 1 255 -98 1
11 B 2 234 473 1
12 B 1 -21 571 0
13 C 1 320 132 0
Question:
I am using dplyr to do data analysis in R, and I come across the following problem.
My data frame is like this:
item day val
1 A 1 90
2 A 2 100
3 A 3 110
4 A 5 80
5 A 8 70
6 B 1 75
7 B 3 65
The data frame is already arranged by item, day. Now I want to mutate a new column whose value, for each row, is the smallest val among rows of the same item whose day falls within the next 2 days.
For the example above, I want the resulting data frame to be:
item day val output
1 A 1 90 100 # the smaller of 100 and 110
2 A 2 100 110 # the only value within 2 days
3 A 3 110 80 # the only value within 2 days
4 A 5 80 NA # there is no data within 2 days
5 A 8 70 NA # there is no data within 2 days
6 B 1 75 65 # the only value within 2 days
7 B 3 65 NA # there is no data within 2 days
I understand that I will probably use group_by and mutate, but how do I write the function inside mutate in order to achieve my desired result?
Any help is greatly appreciated. Let me know if you need me to clarify anything. Thank you!
Try this:
df %>%
# arrange(item, day) %>% # if not already arranged
# take note of the next two values & corresponding difference in days
group_by(item) %>%
mutate(val.1 = lead(val),
day.1 = lead(day) - day,
val.2 = lead(val, 2),
day.2 = lead(day, 2) - day) %>%
ungroup() %>%
# if the value is associated with a day more than 2 days away, change it to NA
mutate(val.1 = ifelse(day.1 %in% c(1, 2), val.1, NA),
val.2 = ifelse(day.2 %in% c(1, 2), val.2, NA)) %>%
# calculate output normally
group_by(item, day) %>%
mutate(output = min(val.1, val.2, na.rm = TRUE)) %>%
ungroup() %>%
# arrange results
select(item, day, val, output) %>%
mutate(output = ifelse(output == Inf, NA, output)) %>%
arrange(item, day)
# A tibble: 7 x 4
item day val output
<fctr> <int> <int> <dbl>
1 A 1 90 100
2 A 2 100 110
3 A 3 110 80.0
4 A 5 80 NA
5 A 8 70 NA
6 B 1 75 65.0
7 B 3 65 NA
Data:
df <- read.table(text = " item day val
1 A 1 90
2 A 2 100
3 A 3 110
4 A 5 80
5 A 8 70
6 B 1 75
7 B 3 65", header = TRUE)
We can use complete from the tidyr package to complete the dataset by day, and then use lead from dplyr and rollapply from zoo to find the minimum of the next two days.
library(dplyr)
library(tidyr)
library(zoo)
DF2 <- DF %>%
group_by(item) %>%
complete(day = full_seq(day, period = 1)) %>%
mutate(output = rollapply(lead(val), width = 2, FUN = min, na.rm = TRUE,
fill = NA, align = "left")) %>%
drop_na(val) %>%
ungroup() %>%
mutate(output = ifelse(output == Inf, NA, output))
DF2
# # A tibble: 7 x 4
# item day val output
# <chr> <dbl> <int> <dbl>
# 1 A 1.00 90 100
# 2 A 2.00 100 110
# 3 A 3.00 110 80.0
# 4 A 5.00 80 NA
# 5 A 8.00 70 NA
# 6 B 1.00 75 65.0
# 7 B 3.00 65 NA
DATA
DF <- read.table(text = "item day val
1 A 1 90
2 A 2 100
3 A 3 110
4 A 5 80
5 A 8 70
6 B 1 75
7 B 3 65",
header = TRUE, stringsAsFactors = FALSE)
We'll create copies of the data with day shifted back by 1 and 2, so that a left join onto the original data attaches the values from the next two days to each row; we then keep only the minimum.
df %>%
left_join(
bind_rows(mutate(.,day=day-1),mutate(.,day=day-2)) %>% rename(output=val)) %>%
group_by(item,day,val) %>%
summarize_at("output",min) %>%
ungroup
# # A tibble: 7 x 4
# item day val output
# <fctr> <dbl> <int> <dbl>
# 1 A 1 90 100
# 2 A 2 100 110
# 3 A 3 110 80
# 4 A 5 80 NA
# 5 A 8 70 NA
# 6 B 1 75 65
# 7 B 3 65 NA
data
df <- read.table(text = " item day val
1 A 1 90
2 A 2 100
3 A 3 110
4 A 5 80
5 A 8 70
6 B 1 75
7 B 3 65", header = TRUE)
I have a dataframe df with an ID variable and daily dates (format XYYYYMMDD) as column headers:
ID <- c(101,102,203,207,209)
X20170101 <- c(1,NA,NA,2,1)
X20170102 <- c(NA,1,1,1,NA)
X20170103<-c(NA,NA,NA,2,1)
X20170201<-c(NA,2,NA,NA,1)
X20170202<-c(NA,1,1,NA,NA)
X20170301<-c(NA,1,NA,NA,NA)
library(data.table)
df <- data.table(ID,X20170101,X20170102,X20170103,X20170201,X20170202,X20170301)
ID X20170101 X20170102 X20170103 X20170201 X20170202 X20170301
101 1 NA NA NA NA NA
102 NA 1 NA 2 1 1
203 NA 1 NA NA 1 NA
207 2 1 2 NA NA NA
209 1 NA 1 1 NA NA
For each ID, I would like to sum across all dates/columns belonging to the same month. If yyyymm is the vector of strings for the first three months
yyyymm <- c("X201701","X201702","X201703")
I would like to obtain the data frame want, with the strings in yyyymm as the column headers. That is:
ID X201701 X201702 X201703
101 1 NA NA
102 1 3 1
203 1 1 NA
207 5 NA NA
209 2 1 NA
My idea was to avoid reshaping my dataset and to use lapply and grepl to partially match the column names, but I'm missing something.
test = lapply(df, function(x) colSums(df[,grepl(x, names(df))]))
Many thanks.
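For reference, a minimal sketch of that lapply/grepl idea (looping over the prefixes in yyyymm rather than over the columns of df, and keeping NA where a whole month is missing for an ID) might look like this:
dfd <- as.data.frame(df)                 # plain data.frame indexing
monthly <- sapply(yyyymm, function(p) {
  cols <- dfd[, grepl(p, names(dfd)), drop = FALSE]
  out <- rowSums(cols, na.rm = TRUE)
  out[rowSums(!is.na(cols)) == 0] <- NA  # a month that is entirely NA stays NA
  out
})
cbind(dfd["ID"], monthly)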
Here's one using the lubridate package to parse the dates and split.default to divide the data frame into groups of columns belonging to the same month. The term NA^(rowSums(is.na(x)) == NCOL(x)) relies on NA^0 being 1 and NA^1 being NA, so an ID whose month is entirely NA stays NA instead of becoming 0:
library(lubridate)
factors = sapply(ymd(gsub("X", "", names(df)[-1])), function(x)
paste0(year(x), sprintf("%02d", as.integer(month(x)))))
data.frame(df[,1],
lapply(split.default(df[,-1], factors), function(x)
rowSums(x, na.rm = TRUE) * (NA^(rowSums(is.na(x)) == NCOL(x)))))
# ID X201701 X201702 X201703
#1 101 1 NA NA
#2 102 1 3 1
#3 203 1 1 NA
#4 207 5 NA NA
#5 209 2 1 NA
Is there a reason you don't want to spread your data?
library(tidyverse)
want <- df %>%
gather(key, value, -ID) %>%
mutate(key = substr(key, 1, 7)) %>%
group_by(ID, key) %>%
summarise(value = sum(value, na.rm=TRUE)) %>%
spread(key, value)
# A tibble: 5 x 4
# Groups: ID [5]
ID X201701 X201702 X201703
* <dbl> <dbl> <dbl> <dbl>
1 101 1 0 0
2 102 1 3 1
3 203 1 1 0
4 207 5 0 0
5 209 2 1 0
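Note that this returns 0 where the desired output has NA. If an all-NA month should stay NA instead, the summarise step can sum conditionally, e.g. (a sketch using the same pipeline):
df %>%
gather(key, value, -ID) %>%
mutate(key = substr(key, 1, 7)) %>%
group_by(ID, key) %>%
summarise(value = if (all(is.na(value))) NA_real_ else sum(value, na.rm = TRUE)) %>%
spread(key, value)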