Create matrix of counts using two variables - r

I have two columns: a unique identifier column id and the day of travel day. My objective is to create a matrix of counts per id per day, including all days even if the count is zero.
> test
id day
1 3 3
2 4 4
3 1 4
4 2 3
5 2 5
6 2 4
7 1 1
8 5 4
9 1 1
10 3 2
11 2 2
12 4 2
13 2 4
14 2 5
15 4 5
16 3 4
17 5 3
18 3 2
19 5 5
20 3 4
21 1 3
22 2 3
23 2 5
24 5 2
25 3 2
The output should be the following, where rows represent id and columns represent day:
> output
1 2 3 4 5
1 2 0 1 1 0
2 0 1 2 2 3
3 0 3 1 2 0
4 0 1 0 1 1
5 0 1 1 1 1
I have tried the following with the reshape2 package:
output <- reshape2::dcast(test, day ~ id, sum)
but it throws the following error:
Error in unique.default(x) : unique() applies only to vectors
Why does this happen and what would the right solution be in dplyr or using base R? Any tips would be appreciated.
Here is the data:
> dput(test)
structure(list(id = c(3, 4, 1, 2, 2, 2, 1, 5, 1, 3, 2, 4, 2,
2, 4, 3, 5, 3, 5, 3, 1, 2, 2, 5, 3), day = c(3, 4, 4, 3, 5, 4,
1, 4, 1, 2, 2, 2, 4, 5, 5, 4, 3, 2, 5, 4, 3, 3, 5, 2, 2)), .Names = c("id",
"day"), row.names = c(NA, -25L), class = "data.frame")

This is the simplest way to do it: use the table() function and then convert the result to a data.frame. It is easier to see what's going on with character variables:
id <- c('a', 'a', 'b', 'f', 'b', 'a')
day <- c('x', 'x', 'x', 'y', 'z', 'x')
test <- data.frame(id, day)
output <- as.data.frame.matrix(table(test))
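Applied to the test data from the question, a minimal sketch (converting day to a factor so that a day with zero trips would still get its own column) could look like this:
# assumes the numeric test data from the question, not the character example above
test$day <- factor(test$day, levels = 1:5)
as.data.frame.matrix(table(test$id, test$day))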

# for each day, tabulate the ids and pad the counts out to ids 1 through 5
ans <- tapply(test$id, test$day, function(x) {
  y <- table(x)
  z <- rep(0, 5)
  z[as.numeric(names(y))] <- y
  z
})
# bind the per-day count vectors into a matrix: rows are ids, columns are days
do.call("cbind", ans)
1 2 3 4 5
[1,] 2 0 1 1 0
[2,] 0 1 2 2 3
[3,] 0 3 1 2 0
[4,] 0 1 0 1 1
[5,] 0 1 1 1 1
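The question also asks for a dplyr option; a minimal sketch along those lines (assuming dplyr and tidyr are available) could be:
library(dplyr)
library(tidyr)
test %>%
  count(id, day) %>%                              # count rows per id/day pair
  complete(id, day = 1:5, fill = list(n = 0)) %>% # fill in missing days with zero counts
  pivot_wider(names_from = day, values_from = n)  # spread the days into columns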

Related

Counter sequential of specific values in R

I have a column like this:
a = c(3, 1, 2, 3, 3, 3, 1, 3, 2, 3, 3, 1, 3, 2, 1, 3, 1)
I want to create a column that cumulatively counts the occurrences of 1 and 2, producing a column like this:
a b
1 3 0
2 1 1
3 2 2
4 3 2
5 3 2
6 3 2
7 1 3
8 3 3
9 2 4
10 3 4
11 3 4
12 1 5
13 3 5
14 2 6
15 1 7
16 3 7
We can use cumsum on a logical vector
df1$b <- cumsum(df1$a %in% c(1, 2))
data
df1 <- data.frame(a)
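To see what cumsum is accumulating, the first few elements of the logical vector look like this:
head(df1$a %in% c(1, 2))
# [1] FALSE  TRUE  TRUE FALSE FALSE FALSE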

R: How to start a new sub_id each time a new sequence begins

Suppose I have data as follows:
tibble(
A = c(1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5),
B = c(1, 1, 2, 1, 2, 3, 1, 2, 1, 1, 1, 2, 3, 4, 1, 1),
)
i.e.,
# A tibble: 16 x 2
A B
<dbl> <dbl>
1 1 1
2 2 1
3 2 2
4 2 1
5 2 2
6 2 3
7 3 1
8 3 2
9 3 1
10 3 1
11 4 1
12 4 2
13 4 3
14 4 4
15 4 1
16 5 1
How do I create a sub_id each time a new sequence begins within the group defined by variable A, i.e.,
tibble(
A = c(1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5),
B = c(1, 1, 2, 1, 2, 3, 1, 2, 1, 1, 1, 2, 3, 4, 1, 1),
sub_id = c(1, 1, 1, 2, 2, 2, 1, 1, 2, 3, 1, 1, 1, 1, 2, 1)
)
# A tibble: 16 x 3
A B sub_id
<dbl> <dbl> <dbl>
1 1 1 1
2 2 1 1
3 2 2 1
4 2 1 2
5 2 2 2
6 2 3 2
7 3 1 1
8 3 2 1
9 3 1 2
10 3 1 3
11 4 1 1
12 4 2 1
13 4 3 1
14 4 4 1
15 4 1 2
16 5 1 1
Hopefully that's well defined. I suppose I'm after a kind of inverse of row_number().
Thanks in advance,
James.
Using base R
df$sub_id <- with(df, ave(B == 1, A, FUN = cumsum))
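A quick check against the example data (coerced to a plain data.frame for this sketch):
df <- data.frame(
  A = c(1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5),
  B = c(1, 1, 2, 1, 2, 3, 1, 2, 1, 1, 1, 2, 3, 4, 1, 1)
)
df$sub_id <- with(df, ave(B == 1, A, FUN = cumsum))
df$sub_id
#  [1] 1 1 1 2 2 2 1 1 2 3 1 1 1 1 2 1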
You already have the "ingredients" laid out:
(i) for each group of column A,
(ii) check whether a new sequence starts.
The following is based on {dplyr}. For demonstration purposes, I create an additional column to show the "start condition"; you can combine this into a single call.
I use the fact that summing over TRUE/FALSE values treats TRUE as 1. If that is not obvious, you can use as.numeric(B == 1) instead.
library(dplyr)
library(tibble)

# load example data
df <- tibble(
  A = c(1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5),
  B = c(1, 1, 2, 1, 2, 3, 1, 2, 1, 1, 1, 2, 3, 4, 1, 1),
  sub_id = c(1, 1, 1, 2, 2, 2, 1, 1, 2, 3, 1, 1, 1, 1, 2, 1)
)

# perform group-wise operations
df %>%
  group_by(A) %>%
  mutate(
    # --------------- highlight start of new sequence --------------
    start = B == 1,
    # --------------- create cumsum over TRUEs ---------------------
    sub_id2 = cumsum(start)
  )
This yields what you were looking for:
# A tibble: 16 x 5
# Groups: A [5]
A B sub_id start sub_id2
<dbl> <dbl> <dbl> <lgl> <int>
1 1 1 1 TRUE 1
2 2 1 1 TRUE 1
3 2 2 1 FALSE 1
4 2 1 2 TRUE 2
5 2 2 2 FALSE 2
6 2 3 2 FALSE 2
7 3 1 1 TRUE 1
8 3 2 1 FALSE 1
9 3 1 2 TRUE 2
10 3 1 3 TRUE 3
11 4 1 1 TRUE 1
12 4 2 1 FALSE 1
13 4 3 1 FALSE 1
14 4 4 1 FALSE 1
15 4 1 2 TRUE 2
16 5 1 1 TRUE 1
We could use group_by and cumsum:
library(dplyr)
df %>%
  group_by(A) %>%
  mutate(sub_id = cumsum(B == 1))
Output:
# Groups: A [5]
A B sub_id
<dbl> <dbl> <int>
1 1 1 1
2 2 1 1
3 2 2 1
4 2 1 2
5 2 2 2
6 2 3 2
7 3 1 1
8 3 2 1
9 3 1 2
10 3 1 3
11 4 1 1
12 4 2 1
13 4 3 1
14 4 4 1
15 4 1 2
16 5 1 1
A data.table option
> library(data.table)
> setDT(df)[, sub_id := cumsum(B == 1), A][]
A B sub_id
1: 1 1 1
2: 2 1 1
3: 2 2 1
4: 2 1 2
5: 2 2 2
6: 2 3 2
7: 3 1 1
8: 3 2 1
9: 3 1 2
10: 3 1 3
11: 4 1 1
12: 4 2 1
13: 4 3 1
14: 4 4 1
15: 4 1 2
16: 5 1 1

subsetting !is.na for multiple conditions unexpected results

I am trying to remove rows from a data frame when NA appears in both of two specific columns.
Example dataframe
tmp <- data.frame(state = c(1, 1, 2, 2, 3, 3, 4, 5),
reg = c(NA, 3, 6, NA, 9, 1, NA, 7),
gas = c(NA, 5, NA, 9, 1, 3, NA, 1),
other = c(1, 2, 4, 2, 6, 8, 1, 1) )
From the table you can see there are two rows where both "reg" and "gas" are NA:
table(tmp$reg, tmp$gas, useNA = 'always')
1 3 5 9 <NA>
1 0 1 0 0 0
3 0 0 1 0 0
6 0 0 0 0 1
7 1 0 0 0 0
9 1 0 0 0 0
<NA> 0 0 0 1 2
I would like to remove these rows but retain the other NA values.
I tried this code:
tmp[!is.na(tmp$reg & tmp$gas), ]
but it removes all rows that have NA in either reg or gas:
state reg gas other
2 1 3 5 2
5 3 9 1 6
6 3 1 3 8
8 5 7 1 1
This is the result that I am looking for:
state reg gas other
2 1 3 5 2
3 2 6 NA 4
4 2 NA 9 2
5 3 9 1 6
6 3 1 3 8
8 5 7 1 1
I also tried
tmp[which(!is.na(tmp$reg & tmp$gas)), ]
but that produces the same unwanted result.
The initial approach doesn't work because tmp$reg & tmp$gas applies & to the raw values rather than to their missingness: NA & a non-zero value evaluates to NA, so !is.na() ends up dropping every row where either column is NA. Taking the opposite approach (removing the rows that fulfil the condition of being NA in both columns) produces the desired output.
tmp <- data.frame(state = c(1, 1, 2, 2, 3, 3, 4, 5),
reg = c(NA, 3, 6, NA, 9, 1, NA, 7),
gas = c(NA, 5, NA, 9, 1, 3, NA, 1),
other = c(1, 2, 4, 2, 6, 8, 1, 1) )
res = tmp[-which(is.na(tmp$reg) & is.na(tmp$gas)),]
res
#> state reg gas other
#> 2 1 3 5 2
#> 3 2 6 NA 4
#> 4 2 NA 9 2
#> 5 3 9 1 6
#> 6 3 1 3 8
#> 8 5 7 1 1
Created on 2020-12-24 by the reprex package (v0.3.0)
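An equivalent subset that keeps the positive logic, and avoids the edge case where -which() selects zero rows when nothing matches, would be:
# keep rows unless both reg and gas are NA
tmp[!(is.na(tmp$reg) & is.na(tmp$gas)), ]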

Clustering rows by group based on column value with conditions

A few days ago I opened this thread:
Clustering rows by group based on column value
In which we obtained this result:
df <- data.frame(ID = c(1,1,1,1,1,1,1,1,1,1,1, 1, 1,1,1,1,1),
Obs1 = c(1,1,0,1,0,1,1,0,1,0,0,0,1,1,1,1,1),
Control = c(0,3,3,1,12,1,1,1,36,13,1,1,2,24,2,2,48),
ClusterObs1 = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5))
With:
df <- df %>%
  group_by(ID) %>%
  mutate_at(vars(Obs1),
            funs(ClusterObs1 = with(rle(.), rep(cumsum(values == 1), lengths))))
Now I have to make some modifications:
If the value of 'Control' is higher than 12 and the current 'Obs1' value equals 1 and also equals the previous 'Obs1' value, then the 'DesiredResultClusterObs1' value should increase by 1:
df <- data.frame(ID = c(1,1,1,1,1,1,1,1,1,1,1, 1, 1,1,1,1,1),
Obs1 = c(1,1,0,1,0,1,1,0,1,0,0,0,1,1,1,1,1),
Control = c(0,3,3,1,12,1,1,1,36,13,1,1,2,24,2,2,48),
ClusterObs1 = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5),
DesiredResultClusterObs1 = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 6, 6, 6, 7))
I have considered adding an if_else condition with lag inside funs, but without success. Any ideas?
EDIT: How would this work for many columns?
This seems to work:
df %>%
  mutate(DesiredResultClusterOrbs1 = with(rle(Control > 12 & Obs1 == 1 & lag(Obs1) == 1),
                                          rep(cumsum(values == 1), lengths)) + ClusterObs1)
ID Obs1 Control ClusterObs1 DesiredResultClusterOrbs1
1 1 1 0 1 1
2 1 1 3 1 1
3 1 0 3 1 1
4 1 1 1 2 2
5 1 0 12 2 2
6 1 1 1 3 3
7 1 1 1 3 3
8 1 0 1 3 3
9 1 1 36 4 4
10 1 0 13 4 4
11 1 0 1 4 4
12 1 0 1 4 4
13 1 1 2 5 5
14 1 1 24 5 6
15 1 1 2 5 6
16 1 1 2 5 6
17 1 1 48 5 7
Basically, we use the rle+rep mechanic from your previous thread to create a cumulative vector from the TRUE/FALSE result of your conditions and add it to the existing ClusterObs1.
If you want to create multiple DesiredResultClusterOrbs, you can use mapply. Maybe there's a dplyr solution for this, but this is base R.
Data:
df <- data.frame(ID = c(1,1,1,1,1,1,1,1,1,1,1, 1, 1,1,1,1,1),
Obs1 = c(1,1,0,1,0,1,1,0,1,0,0,0,1,1,1,1,1),
Obs2 = rbinom(17, 1, .5),
Control = c(0,3,3,1,12,1,1,1,36,13,1,1,2,24,2,2,48),
ClusterObs1 = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5))
df <- df %>%
  mutate_at(vars(Obs2),
            funs(ClusterObs2 = with(rle(.), rep(cumsum(values == 1), lengths))))
The loop:
# x is one of the Obs columns, y is the matching existing Cluster column
newcols <- mapply(function(x, y) {
  with(rle(df$Control > 12 & x == 1 & lag(x) == 1),
       rep(cumsum(values == 1), lengths)) + y
}, df[2:3], df[5:6])
This produces a matrix with the new columns, which you can then rename and cbind to your data:
colnames(newcols) <- paste0("DesiredResultClusterOrbs", 1:2)
cbind.data.frame(df, newcols)
ID Obs1 Obs2 Control ClusterObs1 ClusterObs2 DesiredResultClusterOrbs1 DesiredResultClusterOrbs2
1 1 1 1 0 1 1 1 1
2 1 1 1 3 1 1 1 1
3 1 0 0 3 1 1 1 1
4 1 1 0 1 2 1 2 1
5 1 0 0 12 2 1 2 1
6 1 1 0 1 3 1 3 1
7 1 1 1 1 3 2 3 2
8 1 0 0 1 3 2 3 2
9 1 1 1 36 4 3 4 3
10 1 0 1 13 4 3 4 4
11 1 0 0 1 4 3 4 4
12 1 0 1 1 4 4 4 5
13 1 1 1 2 5 4 5 5
14 1 1 0 24 5 4 6 5
15 1 1 1 2 5 5 6 6
16 1 1 1 2 5 5 6 6
17 1 1 1 48 5 5 7 7
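Note that Obs2 is drawn with rbinom(), so the exact Obs2/ClusterObs2 values shown above depend on the random draw; setting a seed before building the data makes the example reproducible, e.g.:
set.seed(1)  # any fixed seed; makes the rbinom() draw for Obs2 repeatable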

How to sort dataframe in descending order

I have a data.frame(v1,v2,y)
v1: 1 5 8 6 1 1 6 8
v2: 2 6 9 8 4 5 2 3
y: 1 1 2 2 3 3 4 4
and now I want it sorted by y like this:
y: 1 2 3 4 1 2 3 4
v1: 1 8 1 6 5 6 1 8
v2: 2 9 4 2 6 8 5 3
I tried:
sorted <- df[,,sort(df$y)]
but this does not work. Please help.
You can try a tidyverse solution
library(tidyverse)
data.frame(y, v1, v2) %>%
  group_by(y) %>%
  mutate(n = 1:n()) %>%
  arrange(n, y) %>%
  select(-n) %>%
  ungroup()
# A tibble: 8 x 3
y v1 v2
<dbl> <dbl> <dbl>
1 1 1 2
2 2 8 9
3 3 1 4
4 4 6 2
5 1 5 6
6 2 6 8
7 3 1 5
8 4 8 3
data:
v1 <- c(1, 5, 8, 6, 1, 1, 6, 8)
v2<- c( 2, 6, 9, 8, 4, 5, 2, 3)
y<- c(1, 1, 2, 2, 3, 3, 4, 4 )
The idea is to add an index along y and then arrange by that index and y.
We can use ave from base R to create a sequence within each 'y' group and order on it:
df[order(with(df, ave(y, y, FUN = seq_along))),]
# v1 v2 y
#1 1 2 1
#3 8 9 2
#5 1 4 3
#7 6 2 4
#2 5 6 1
#4 6 8 2
#6 1 5 3
#8 8 3 4
data
df <- data.frame(v1 = c(1, 5, 8, 6, 1, 1, 6, 8),
v2 = c(2, 6, 9, 8, 4, 5, 2, 3),
y = c(1, 1, 2, 2, 3, 3, 4, 4))
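To see why this works, the group-wise index that order() sorts on looks like this:
with(df, ave(y, y, FUN = seq_along))
# [1] 1 2 1 2 1 2 1 2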
You could also take two alternating subsets and rbind them together:
rbind(df[c(TRUE,FALSE),], df[c(FALSE,TRUE),])
The result:
v1 v2 y
1 1 2 1
3 8 9 2
5 1 4 3
7 6 2 4
2 5 6 1
4 6 8 2
6 1 5 3
8 8 3 4
You can use matrix() to reorder the indices of the rows:
df <- data.frame(v1 = c(1, 5, 8, 6, 1, 1, 6, 8),
v2 = c(2, 6, 9, 8, 4, 5, 2, 3),
y = c(1, 1, 2, 2, 3, 3, 4, 4))
df[c(matrix(1:nrow(df), ncol=2, byrow=TRUE)),]
# v1 v2 y
# 1 1 2 1
# 3 8 9 2
# 5 1 4 3
# 7 6 2 4
# 2 5 6 1
# 4 6 8 2
# 6 1 5 3
# 8 8 3 4
The solution relies on the order in which the elements of a matrix are stored: R uses column-major order (as in Fortran), so the index of the first dimension varies fastest. In Fortran terminology, the extent of this first dimension is called the leading dimension (for a 2-dimensional array, i.e. a matrix, it is the number of rows).
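For illustration, flattening the index matrix with c() makes this column-major order visible:
c(matrix(1:8, ncol = 2, byrow = TRUE))
# [1] 1 3 5 7 2 4 6 8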
