R: Create a table of like against like

Could someone help me with this transformation in R? I would like to transform
this table
ID  Condition  Count
1   A          1
1   B          0
2   A          1
2   B          1
3   A          0
3   B          1
4   A          1
4   B          1
5   A          1
5   B          1
6   A          1
6   B          0
7   A          0
7   B          1
8   A          0
9   B          0
into this table of like against like:
A  B  Count of ID
1  0  2
0  0  1
1  1  3
0  1  2
Any help would be appreciated. Thank you.
Phil,

You can do:
with(dat, split(Count, Condition)) |>
  table() |>
  data.frame()
  A B Freq
1 0 0    1
2 1 0    2
3 0 1    2
4 1 1    3
Data:
dat <- structure(list(ID = c(1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7,
7, 8, 9), Condition = c("A", "B", "A", "B", "A", "B", "A", "B",
"A", "B", "A", "B", "A", "B", "A", "B"), Count = c(1, 0, 1, 1,
0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0)), class = "data.frame", row.names = c(NA,
-16L))
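One thing to be aware of: table() on the split list pairs the A and B counts by position, not by ID, so ID 8's A value is tabulated against ID 9's B value here. If you need pairing strictly by ID, a minimal base R sketch (unmatched IDs show up in the NA margins):
# Reshape to one row per ID, then cross-tabulate the two count columns;
# IDs missing a condition (8 and 9 in this data) appear as NA.
wide <- reshape(dat, idvar = "ID", timevar = "Condition", direction = "wide")
table(A = wide$Count.A, B = wide$Count.B, useNA = "ifany")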

Here is a tidyverse solution. I filled the missing values with 0; note that this leads to a different count than in your table, because IDs 8 and 9 then both contribute an (A = 0, B = 0) row (did you mean to have 8, 8 as the last two IDs rather than 8, 9?):
data <- read.table(text = "ID Condition Count
1 A 1
1 B 0
2 A 1
2 B 1
3 A 0
3 B 1
4 A 1
4 B 1
5 A 1
5 B 1
6 A 1
6 B 0
7 A 0
7 B 1
8 A 0
9 B 0", header = TRUE)
library(tidyr)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
data %>%
  pivot_wider(
    id_cols = ID,
    names_from = Condition,
    values_from = Count,
    values_fill = 0
  ) %>%
  count(A, B, name = "Count of ID")
#> # A tibble: 4 × 3
#>       A     B `Count of ID`
#>   <int> <int>         <int>
#> 1     0     0             2
#> 2     0     1             2
#> 3     1     0             2
#> 4     1     1             3
Created on 2023-01-20 by the reprex package (v1.0.0)

Related

How to subtract value of one group from other groups in R

I am trying to subtract the value of one group from the others, and I am hoping to use the tidyverse.
structure(list(A = c(1, 1, 1, 2, 2, 2, 3, 3, 3), group = c("a",
"b", "c", "a", "b", "c", "a", "b", "c"), value = c(10, 11, 12,
11, 40, 23, 71, 72, 91)), class = "data.frame", row.names = c(NA,
-9L))
That is my data. Within each value of A, I want to subtract the value of group 'a' from the other groups and store the difference in a new variable.
Base R solution:
# Per A group, average the 'a' rows (NA elsewhere) and subtract that from value:
df$new <- df$value - ave(replace(df$value, df$group != "a", NA), df$A, FUN = function(x) mean(x, na.rm = TRUE))
> df
A group value new
1 1 a 10 0
2 1 b 11 1
3 1 c 12 2
4 2 a 11 0
5 2 b 40 29
6 2 c 23 12
7 3 a 71 0
8 3 b 72 1
9 3 c 91 20
dplyr method (this assumes at most one 'a' value per group; otherwise value[group == 'a'] has length greater than one, and the subtraction will recycle or error):
df %>% group_by(A) %>% mutate(new = ifelse(group != 'a', value - value[group == 'a'], value) )
# A tibble: 9 x 4
# Groups: A [3]
A group value new
<dbl> <chr> <dbl> <dbl>
1 1 a 10 10
2 1 b 11 1
3 1 c 12 2
4 2 a 11 11
5 2 b 40 29
6 2 c 23 12
7 3 a 71 71
8 3 b 72 1
9 3 c 91 20
Or, if you want to apply the subtraction to every row (so the 'a' rows become 0):
df %>% group_by(A) %>% mutate(new = value - value[group == 'a'] )
# A tibble: 9 x 4
# Groups: A [3]
A group value new
<dbl> <chr> <dbl> <dbl>
1 1 a 10 0
2 1 b 11 1
3 1 c 12 2
4 2 a 11 0
5 2 b 40 29
6 2 c 23 12
7 3 a 71 0
8 3 b 72 1
9 3 c 91 20
I used data.table rather than a plain data.frame simply because I'm more familiar with it.
library(data.table)
data <- setDT(structure(list(A = c(1, 1, 1, 2, 2, 2, 3, 3, 3), group = c("a",
"b", "c", "a", "b", "c", "a", "b", "c"), value = c(10, 11, 12,
11, 40, 23, 71, 72, 91)), class = "data.frame", row.names = c(NA,-9L)))
for (i in unique(data$A)) {
  data[A == i, subtraction := value - data[A == i & group == "a", value]]
}
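As an aside, a loop-free data.table sketch of the same idea (assuming exactly one 'a' row per A group):
# Grouped by A, subtract each group's single 'a' value from the whole group:
data[, subtraction := value - value[group == "a"], by = A]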

How to create dummy variables per group of another variable in tidyverse

I want to create (dummy) variables that show whether an observation belongs to a group of observations (identifiable by a common Group_ID) with a certain combination of characteristics across that group. The code example below makes clear exactly what I mean.
I tried combinations of group_by and caret::dummyVars, but had no success. I am running out of ideas, so any help would be appreciated very much.
library(tidyverse)
# Input data
# please note: in my case each value of the column Role will appear only once per Group_ID.
input_data <- tribble(
  ~Group_ID, ~Role, ~Income,
  #--|--|----
  1, "a", 3.6,
  1, "b", 8.5,
  2, "a", 7.6,
  2, "c", 9.5,
  2, "d", 9.7,
  3, "a", 1.6,
  3, "b", 4.5,
  3, "c", 2.7,
  3, "e", 7.7,
  4, "b", 3.3,
  4, "c", 6.2,
)
# desired output
output_data <- tribble(
  ~Group_ID, ~Role, ~Income, ~Role_A, ~Role_B, ~Role_C, ~Role_D, ~Role_E, ~All_roles,
  #--|--|----
  1, "a", 3.6, 1, 1, 0, 0, 0, "ab",
  1, "b", 8.5, 1, 1, 0, 0, 0, "ab",
  2, "a", 7.6, 1, 0, 1, 1, 0, "acd",
  2, "c", 9.5, 1, 0, 1, 1, 0, "acd",
  2, "d", 9.7, 1, 0, 1, 1, 0, "acd",
  3, "a", 1.6, 1, 1, 1, 0, 1, "abce",
  3, "b", 4.5, 1, 1, 1, 0, 1, "abce",
  3, "c", 2.7, 1, 1, 1, 0, 1, "abce",
  3, "e", 7.7, 1, 1, 1, 0, 1, "abce",
  4, "b", 3.3, 0, 1, 1, 0, 0, "bc",
  4, "c", 6.2, 0, 1, 1, 0, 0, "bc"
)
The following takes advantage of base R modeling functions to create the dummies.
First, create a model matrix with no intercept.
fit <- lm(Group_ID ~ 0 + Role, input_data)
m <- model.matrix(fit)
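Incidentally, fitting an lm() is not strictly required here; model.matrix() accepts a formula directly, so an equivalent sketch is:
# Build the 0/1 indicator columns straight from the formula, no model fit needed:
m <- model.matrix(~ 0 + Role, input_data)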
Now process that matrix: the dummies the question asks for are the per-Group_ID sums of its columns.
input_data %>%
  bind_cols(m %>% as.data.frame()) %>%
  group_by(Group_ID) %>%
  mutate_at(vars(matches("Role[[:alpha:]]")), sum) %>%
  mutate(all_roles = paste(Role, collapse = ""))
## A tibble: 11 x 9
## Groups: Group_ID [4]
# Group_ID Role Income Rolea Roleb Rolec Roled Rolee all_roles
# <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
# 1 1 a 3.6 1 1 0 0 0 ab
# 2 1 b 8.5 1 1 0 0 0 ab
# 3 2 a 7.6 1 0 1 1 0 acd
# 4 2 c 9.5 1 0 1 1 0 acd
# 5 2 d 9.7 1 0 1 1 0 acd
# 6 3 a 1.6 1 1 1 0 1 abce
# 7 3 b 4.5 1 1 1 0 1 abce
# 8 3 c 2.7 1 1 1 0 1 abce
# 9 3 e 7.7 1 1 1 0 1 abce
#10 4 b 3.3 0 1 1 0 0 bc
#11 4 c 6.2 0 1 1 0 0 bc
Using dplyr and cSplit_e from splitstackshape: for every Group_ID we paste the Role values together, and cSplit_e then separates them into new binary columns based on presence or absence.
library(splitstackshape)
library(dplyr)
input_data %>%
  group_by(Group_ID) %>%
  mutate(new_role = paste(Role, collapse = "")) %>%
  ungroup() %>%
  cSplit_e("new_role", sep = "", type = "character", fill = 0)
# Group_ID Role Income new_role new_role_a new_role_b new_role_c new_role_d new_role_e
#1 1 a 3.6 ab 1 1 0 0 0
#2 1 b 8.5 ab 1 1 0 0 0
#3 2 a 7.6 acd 1 0 1 1 0
#4 2 c 9.5 acd 1 0 1 1 0
#5 2 d 9.7 acd 1 0 1 1 0
#6 3 a 1.6 abce 1 1 1 0 1
#7 3 b 4.5 abce 1 1 1 0 1
#8 3 c 2.7 abce 1 1 1 0 1
#9 3 e 7.7 abce 1 1 1 0 1
#10 4 b 3.3 bc 0 1 1 0 0
#11 4 c 6.2 bc 0 1 1 0 0

How to add a column with progressive number based on condition

I am trying to add a column to my existing data set.
The data set has three columns:
Student (the column with the participant ID),
Week (the number of the week of the year during which the data were collected), and
Day (the number of the weekday during which the data were collected).
The new column Obs that I am trying to create should contain a progressive number (from 1 to n) indicating which testing week each row belongs to, counted per student.
I have tried to use group_by in combination with rep but it does not seem to produce the result I want:
Week <- c(1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4)
Day <- c(1, 2, 3, 2, 3, 5, 1, 3, 2, 3, 4, 5)
Student <- c("A", "A", "A", "B", "B", "B", "B", "B", "C", "C", "C", "C")
fake.db <- data.frame(Student, Week, Day)
library(dplyr)
fake.db %>%
  group_by(Student) %>%
  mutate(Obs = rep(1:length(Student), each = Week))
# Student Week Day Obs
# <fct> <dbl> <dbl> <int>
# 1 A 1 1 1
# 2 A 1 2 2
# 3 A 1 3 3
# 4 B 2 2 1
# 5 B 2 3 2
# 6 B 2 5 3
# 7 B 3 1 4
# 8 B 3 3 5
# 9 C 4 2 1
#10 C 4 3 2
#11 C 4 4 3
#12 C 4 5 4
What I would like to obtain is different. For the first week of data collection, 1 should be reported, and for the students for whom data were collected during a second week, 2 should be reported, etc.:
# Student Week Day Obs
#1 A 1 1 1
#2 A 1 2 1
#3 A 1 3 1
#4 B 2 2 1
#5 B 2 3 1
#6 B 2 5 1
#7 B 3 1 2
#8 B 3 3 2
#9 C 4 2 1
#10 C 4 3 1
#11 C 4 4 1
#12 C 4 5 1
One dplyr possibility could be:
fake.db %>%
  group_by(Student) %>%
  mutate(Obs = cumsum(!duplicated(Week)))
Student Week Day Obs
<fct> <dbl> <dbl> <int>
1 A 1 1 1
2 A 1 2 1
3 A 1 3 1
4 B 2 2 1
5 B 2 3 1
6 B 2 5 1
7 B 3 1 2
8 B 3 3 2
9 C 4 2 1
10 C 4 3 1
11 C 4 4 1
12 C 4 5 1
It groups by the "Student" column and takes the cumulative sum of non-duplicated "Week" values.
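For intuition, here is what those building blocks do on one student's weeks:
w <- c(2, 2, 2, 3, 3)   # student B's weeks
!duplicated(w)          # TRUE FALSE FALSE TRUE FALSE
cumsum(!duplicated(w))  # 1 1 1 2 2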
Or:
fake.db %>%
  group_by(Student) %>%
  mutate(Obs = with(rle(Week), rep(seq_along(lengths), lengths)))
It groups by the "Student" column and creates a run-length-type group ID along the "Week" column.
Or:
fake.db %>%
  group_by(Student) %>%
  mutate(Obs = dense_rank(Week))
It groups by the "Student" column and dense-ranks the values in the "Week" column.
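dense_rank() maps the distinct weeks to consecutive integers, for example:
dplyr::dense_rank(c(2, 2, 2, 3, 3))
#> [1] 1 1 1 2 2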
What I understand the issue to be is that you want to count the weeks since the first test week for each student. I.e. Week 2 is student B's first week of testing, so it gets Obs = 1. That means you can do a grouped mutate (note this assumes each student's tested weeks are consecutive):
library(dplyr)
fake.db <- structure(list(Student = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L), .Label = c("A", "B", "C"), class = "factor"), Week = c(1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4), Day = c(1, 2, 3, 2, 3, 5, 1, 3, 2, 3, 4, 5)), class = "data.frame", row.names = c(NA, -12L))
fake.db %>%
  group_by(Student) %>%
  mutate(Obs = Week - min(Week) + 1)
#> # A tibble: 12 x 4
#> # Groups: Student [3]
#> Student Week Day Obs
#> <fct> <dbl> <dbl> <dbl>
#> 1 A 1 1 1
#> 2 A 1 2 1
#> 3 A 1 3 1
#> 4 B 2 2 1
#> 5 B 2 3 1
#> 6 B 2 5 1
#> 7 B 3 1 2
#> 8 B 3 3 2
#> 9 C 4 2 1
#> 10 C 4 3 1
#> 11 C 4 4 1
#> 12 C 4 5 1
Created on 2019-05-10 by the reprex package (v0.2.1)
A brief method with by
unlist(by(fake.db, fake.db[, 1], function(x) as.numeric(factor(x[, 2]))))
# A1 A2 A3 B1 B2 B3 B4 B5 C1 C2 C3 C4
# 1 1 1 1 1 1 2 2 1 1 1 1
Data
fake.db <- structure(list(Student = structure(c(1L, 1L, 1L, 2L, 2L, 2L,
2L, 2L, 3L, 3L, 3L, 3L), .Label = c("A", "B", "C"), class = "factor"),
Week = c(1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4), Day = c(1,
2, 3, 2, 3, 5, 1, 3, 2, 3, 4, 5)), class = "data.frame", row.names = c(NA,
-12L))
You can check whether consecutive "Week" values differ:
fake.db %>%
  group_by(Student) %>%
  arrange(Week) %>%
  mutate(Obs = cumsum(c(1, diff(Week) != 0)))
Or, if the values aren't numeric, you can compare each value to its lag:
fake.db %>%
  group_by(Student) %>%
  arrange(Week) %>%
  mutate(Obs = cumsum(Week != lag(Week, default = first(Week))) + 1)

Dplyr rolling balance

I am trying to compute a balance column.
So, to show an example, I want to go from this:
df <- data.frame(group = c("A", "A", "A", "A", "A"),
                 start = c(5, 0, 0, 0, 0),
                 receipt = c(1, 5, 6, 4, 6),
                 out = c(4, 5, 3, 2, 5))
> df
group start receipt out
1 A 5 1 4
2 A 0 5 5
3 A 0 6 3
4 A 0 4 2
5 A 0 6 5
to creating a new balance column like the following
> dfb
group start receipt out balance
1 A 5 1 4 2
2 A 0 5 5 2
3 A 0 6 3 5
4 A 0 4 2 7
5 A 0 6 5 8
I tried the following, but it isn't working:
dfc <- df %>%
  group_by(group) %>%
  mutate(balance = if_else(row_number() == 1, start + receipt - out, (lag(balance) + receipt) - out)) %>%
  ungroup()
Would really appreciate some help with this. Thanks!
You could use base R's cumsum within your dplyr pipeline. Note: I had to change your initial df to match the one implied by your required result, because the "out" values differed.
df <- data.frame(group = c("A", "A", "A", "A", "A"),
                 start = c(5, 0, 0, 0, 0),
                 receipt = c(1, 5, 6, 4, 6),
                 out = c(4, 5, 3, 2, 5))
dfc <- df %>%
  group_by(group) %>%
  mutate(balance = cumsum(start + receipt - out))
Source: local data frame [5 x 5]
Groups: group [1]
group start receipt out balance
<fctr> <dbl> <dbl> <dbl> <dbl>
1 A 5 1 4 2
2 A 0 5 5 2
3 A 0 6 3 5
4 A 0 4 2 7
5 A 0 6 5 8
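Why this works: the balance is the running net flow, so the cumulative sum of the per-row net (start + receipt - out) reproduces the recursion balance[i] = balance[i-1] + receipt[i] - out[i]. A quick base R sanity check of that equivalence:
net <- with(df, start + receipt - out)
all.equal(cumsum(net), Reduce(`+`, net, accumulate = TRUE))
#> [1] TRUE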

Unique body count column

I'm trying to add a body count for each unique person. Each person has multiple data points.
df <- data.frame(PERSON = c("A", "A", "A", "B", "B", "C", "C", "C", "C"),
                 Y = c(2, 5, 4, 1, 2, 5, 3, 7, 1))
This is what I'd like it to look like:
PERSON Y UNIQ_CT
1 A 2 1
2 A 5 0
3 A 4 0
4 B 1 1
5 B 2 0
6 C 5 1
7 C 3 0
8 C 7 0
9 C 1 0
You can use duplicated and negate it:
transform(df, UNIQ_CT = as.integer(!duplicated(PERSON)))
Since the question has a dplyr tag, here is an option:
library(dplyr)
df %>%
  group_by(PERSON) %>%
  mutate(UNIQ_CT = ifelse(row_number() == 1, 1, 0))
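A slightly more direct variant of the same idea, without ifelse():
df %>%
  group_by(PERSON) %>%
  mutate(UNIQ_CT = as.integer(row_number() == 1))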
