Obtaining descriptive statistics of observations with years of complete data in R

I have the following panel dataset
id year Value
 1    1    50
 2    1    55
 2    2    40
 3    1    48
 3    2    54
 3    3    24
 4    2    24
 4    3    57
 4    4    30
I would like to obtain descriptive statistics of the number of years for which observations have information available. For example: the number of individuals with only one year of information is 1, the number with exactly two years is 1, and the number with three years of available information is 2.
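For reproducibility, the example data can be constructed as follows (a sketch; the names dat, df, and df1 used in the answers below all refer to this same data frame):
dat <- data.frame(id    = c(1, 2, 2, 3, 3, 3, 4, 4, 4),
                  year  = c(1, 1, 2, 1, 2, 3, 2, 3, 4),
                  Value = c(50, 55, 40, 48, 54, 24, 24, 57, 30))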

In base R, using table and its faster cousin tabulate:
table(tabulate(dat$id))
1 2 3
1 1 2
or
table(table(dat$id))
Convert to a data.frame:
data.frame(table(tabulate(dat$id)))
Var1 Freq
1 1 1
2 2 1
3 3 2

lapply(split(df$id, ave(df$year, df$id, FUN = length)), function(x) length(unique(x)))
#$`1`
#[1] 1
#$`2`
#[1] 1
#$`3`
#[1] 2

We can use data.table. Convert the 'data.frame' to a 'data.table' (setDT(df1)), then, grouped by 'id', get the number of unique 'year' values (uniqueN); grouped by that count, get the number of rows (.N):
library(data.table)
setDT(df1)[, uniqueN(year), .(id)][, .N, V1]
# V1 N
#1: 1 1
#2: 2 1
#3: 3 2
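For completeness, a dplyr sketch of the same tally (not part of the original answers; it assumes one row per id/year pair, as in the example):
library(dplyr)
dat %>%
  count(id, name = "n_years") %>%   # years of data per individual
  count(n_years)                    # individuals per number of years: 1 -> 1, 2 -> 1, 3 -> 2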

Related

How to use apply function once for each unique factor value

I'm trying out some commands on the built-in R dataset ChickWeight. The data looks as follows.
   weight Time Chick Diet
1      42    0     1    1
2      51    2     1    1
3      59    4     1    1
4      64    6     1    1
5      76    8     1    1
6      93   10     1    1
7     106   12     1    1
8     125   14     1    1
9     149   16     1    1
10    171   18     1    1
11    199   20     1    1
12    205   21     1    1
13     40    0     2    1
14     49    2     2    1
15     58    4     2    1
Now what I would like to do is simply output, for each "Chick", the difference between the weight at Time 0 and Time 21 (the last time value), i.e. the weight the chick has put on.
I've been trying tapply(ChickWeight$weight, ChickWeight$Chick, function(x) x[length(x)] - x[1]). But this of course applies the value to all rows.
How do I make it so that it applies only once for each unique Chick-value?
If we need a single value per group (assuming that 'Chick' and 'Diet' are the grouping factor columns, and that the data is stored as df1):
library(data.table)
setDT(df1)[, list(Diff = abs(weight[Time == 21] - weight[Time == 0])), .(Chick, Diet)]
and if we need to create a column:
setDT(df1)[, Diff := abs(weight[Time == 21] - weight[Time == 0]), .(Chick, Diet)]
I noticed that in the example Time = 21 is not found for Chick No. 2; in that case, we may want to return the single value that is available:
setDT(df1)[, {tmp <- Time %in% c(0, 21)
              list(Diff = if (sum(tmp) > 1) abs(diff(weight[tmp])) else weight[tmp])},
           by = .(Chick, Diet)]
# Chick Diet Diff
#1: 1 1 163
#2: 2 1 40
If we are taking the difference of 'weight' based on the max and min 'Time' for each group
setDT(df1)[, list(Diff = weight[which.max(Time)] -
                         weight[which.min(Time)]), .(Chick, Diet)]
# Chick Diet Diff
#1: 1 1 163
#2: 2 1 18
Also, if 'Time' is ordered within each group:
setDT(df1)[, list(Diff = abs(diff(weight[c(1L, .N)]))), by = .(Chick, Diet)]
Using by from base R
by(df1[1:2], df1[3:4], FUN = function(x) with(x,
   abs(weight[which.max(Time)] - weight[which.min(Time)])))
#Chick: 1
#Diet: 1
#[1] 163
#------------------------------------------------------------
#Chick: 2
#Diet: 1
#[1] 18
Here's a solution using dplyr:
library(dplyr)
ChickWeight %>%
  group_by(Chick = as.numeric(as.character(Chick))) %>%
  summarise(weight_gain = last(weight) - first(weight), final_time = last(Time))
(first and last as suggested by @ulfelder.)
Note that ChickWeight$Chick is an ordered factor so without coercing it into numeric the final order looks odd.
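A quick way to check this (a small sketch, just inspecting the factor):
class(ChickWeight$Chick)
# [1] "ordered" "factor"
head(levels(ChickWeight$Chick))  # the levels are not in numeric order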
Using base R:
ChickWeight$Chick <- as.numeric(as.character(ChickWeight$Chick))
tapply(ChickWeight$weight, ChickWeight$Chick, function(x) x[length(x)] - x[1])

Create a rolling index of pairs over groups

I need to create (with R) a rolling index of pairs from a data set that includes groups. Consider the following data set:
times <- c(4,3,2)
V1 <- unlist(lapply(times, function(x) seq(1, x)))
df <- data.frame(group = rep(1:length(times), times = times),
                 V1 = V1,
                 rolling_index = c(1,1,2,2,3,3,4,5,5))
df
  group V1 rolling_index
1     1  1             1
2     1  2             1
3     1  3             2
4     1  4             2
5     2  1             3
6     2  2             3
7     2  3             4
8     3  1             5
9     3  2             5
The data frame I have includes the variables group and V1. Within each group V1 designates a running index (that may or may not start at 1).
I want to create a new indexing variable that looks like rolling_index. This variable pairs rows within the same group that have consecutive V1 values, thus creating a new rolling index. This new index must be consecutive over groups. If there is an odd number of rows within a group (e.g. group 2), then the last, single row gets its own rolling index value.
You can try
library(data.table)
setDT(df)[, gr := as.numeric(gl(.N, 2, .N)), group][,
  rollindex := cumsum(c(TRUE, abs(diff(gr)) > 0))][, gr := NULL]
# group V1 rolling_index rollindex
#1: 1 1 1 1
#2: 1 2 1 1
#3: 1 3 2 2
#4: 1 4 2 2
#5: 2 1 3 3
#6: 2 2 3 3
#7: 2 3 4 4
#8: 3 1 5 5
#9: 3 2 5 5
Or using base R
indx1 <- !duplicated(df$group)
indx2 <- with(df, ave(group, group, FUN = function(x)
  gl(length(x), 2, length(x))))
cumsum(c(TRUE,diff(indx2)>0)|indx1)
#[1] 1 1 2 2 3 3 4 5 5
Update
The above methods are based on the 'group' column. Suppose you already have a sequence column ('V1') by group, as shown in the example; then creating the rolling index is easier:
cumsum(!!(df$V1 %% 2))
#[1] 1 1 2 2 3 3 4 5 5
As mentioned in the post, if the 'V1' column does not start at 1 for some groups, we can build the sequence from the 'group' column and then take the cumsum as above:
cumsum(!!(with(df, ave(seq_along(group), group, FUN = seq_along)) %% 2))
#[1] 1 1 2 2 3 3 4 5 5
There is probably a simpler way but you can do:
rep_each <- unlist(mapply(function(q, r) c(rep(2, q), rep(1, r)),
                          q = table(df$group) %/% 2,
                          r = table(df$group) %% 2))
df$rolling_index <- inverse.rle(x = list(lengths = rep_each, values = seq(rep_each)))
df$rolling_index
#[1] 1 1 2 2 3 3 4 5 5
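For comparison, a dplyr sketch of the same pairing logic (not from the original answers; rolling_index2 should reproduce the expected column):
library(dplyr)
df %>%
  group_by(group) %>%
  mutate(pair = (row_number() + 1) %/% 2) %>%   # pair up consecutive rows within each group
  ungroup() %>%
  mutate(rolling_index2 = match(paste(group, pair),
                                unique(paste(group, pair)))) %>%  # make pairs consecutive across groups
  select(-pair)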

How to count and flag unique values in an R dataframe

I have the following dataframe:
data <- data.frame(week = c(rep("2014-01-06", 3), rep("2014-01-13", 3), rep("2014-01-20", 3)), values = c(1, 2, 3))
        week values
1 2014-01-06      1
2 2014-01-06      2
3 2014-01-06      3
4 2014-01-13      1
5 2014-01-13      2
6 2014-01-13      3
7 2014-01-20      1
8 2014-01-20      2
9 2014-01-20      3
I want to create a column in data that indexes the unique weeks, assigning each a sequential value, so that the df appears like this:
        week values seq_value
1 2014-01-06      1         1
2 2014-01-06      2         1
3 2014-01-06      3         1
4 2014-01-13      1         2
5 2014-01-13      2         2
6 2014-01-13      3         2
7 2014-01-20      1         3
8 2014-01-20      2         3
9 2014-01-20      3         3
I guess the idiomatic way would be just to calculate the actual week of the year from the date provided (note that this matches the desired output only if your weeks start from the first week of the year):
as.integer(format(as.Date(data$week), "%W"))
## [1] 1 1 1 2 2 2 3 3 3
Another base R solution would be using as.POSIXlt class and utilizing its yday attribute
as.POSIXlt(data$week)$yday %/% 7 + 1
## [1] 1 1 1 2 2 2 3 3 3
If you want a shorter syntax, the data.table package (among many others - see @Kshashaa's comment) offers a quick wrapper:
library(data.table)
week(data$week)
## [1] 1 1 1 2 2 2 3 3 3
The nicest thing about this package is that you can create columns by reference (similar to @akrun's last solution, but probably more efficient because it doesn't require the by argument):
setDT(data)[, seq_value := week(week)]
You could use base R by converting the "week" column to a factor whose levels are the unique values of "week"; converting the factor to numeric then gives the numeric index of each level.
data$seq_value <- with(data, as.numeric(factor(week,levels=unique(week) )))
data$seq_value
#[1] 1 1 1 2 2 2 3 3 3
Or match the "week" column to unique values of that column to get the numeric index.
with(data, match(week, unique(week)))
#[1] 1 1 1 2 2 2 3 3 3
Or use data.table: first convert the data.frame to a data.table (setDT), then get the group index (.GRP) of the grouping variable 'week' and assign it to the new column seq_value:
library(data.table)
setDT(data)[,seq_value:=.GRP, week][]
A dplyr solution:
library(dplyr)
data %>%
  mutate(seq_value = dense_rank(week))

Using R: Make a new column that counts the number of times 'n' conditions from 'n' other columns occur

I have two columns (ID and Value). I would like a count column that lists the number of times the same Value occurs per ID; when a combination occurs more than once, the count is simply repeated on each of its rows. There are other variables in this data set, but the new count variable needs to be conditional on only two of them. I have scoured this site, but I can't find a way to make the new variable conditional on more than one variable.
ID Value Count
 1     a     2
 1     a     2
 1     b     1
 2     a     2
 2     a     2
 3     a     1
 3     b     3
 3     b     3
 3     b     3
Thank you in advance!
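A sketch constructing the example data used in the answers below (the Count column is the desired output, included as integers so the identical() check further down holds):
df <- data.frame(ID    = c(1, 1, 1, 2, 2, 3, 3, 3, 3),
                 Value = c("a", "a", "b", "a", "a", "a", "b", "b", "b"),
                 Count = c(2L, 2L, 1L, 2L, 2L, 1L, 3L, 3L, 3L))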
You can use ave:
df <- within(df, Count <- ave(ID, list(ID, Value), FUN=length))
You can use ddply from the plyr package:
library(plyr)
df1 <- ddply(df, .(ID, Value), transform, count1 = length(ID))
> df1
  ID Value Count count1
1  1     a     2      2
2  1     a     2      2
3  1     b     1      1
4  2     a     2      2
5  2     a     2      2
6  3     a     1      1
7  3     b     3      3
8  3     b     3      3
9  3     b     3      3
> identical(df1$Count,df1$count1)
[1] TRUE
Update: as suggested by @Arun, you can replace transform with mutate if you are working with a large data.frame.
Of course, data.table also has a solution!
df[, Count := .N, by = list(ID, Value)]
The built-in constant, ".N", is a length 1 vector reporting the number of observations in each group.
Because := adds the Count column by reference, the original dimensions are retained; a join back to the initial data.frame would only be needed if you aggregated instead of assigning, as the sketch below shows.
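A quick sketch of the two forms, assuming the example data frame df has been converted with setDT(df):
# aggregated result: one row per (ID, Value) group
df[, .N, by = list(ID, Value)]
# column added by reference: the original number of rows is kept
df[, Count := .N, by = list(ID, Value)]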

How to aggregate some columns while keeping other columns in R?

I have a data frame like this:
  id no age
1  1  7  23
2  1  2  23
3  2  1  25
4  2  4  25
5  3  6  23
6  3  1  23
and I hope to aggregate the data frame by id into a form like this (just sum the no values that share the same id, but keep age, which is constant within id):
  id no age
1  1  9  23
2  2  5  25
3  3  7  23
How to achieve this using R?
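A sketch constructing the example data (the answers below assume it is named df; DF in the data.table answer refers to the same object):
df <- data.frame(id  = c(1, 1, 2, 2, 3, 3),
                 no  = c(7, 2, 1, 4, 6, 1),
                 age = c(23, 23, 25, 25, 23, 23))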
Assuming that your data frame is named df.
aggregate(no~id+age, df, sum)
#   id age no
# 1  1  23  9
# 2  3  23  7
# 3  2  25  5
Even better, data.table:
library(data.table)
# convert your object to a data.table (by reference) to unlock data.table syntax
setDT(DF)
DF[ , .(sum_no = sum(no), unq_age = unique(age)), by = id]
Alternatively, you could use ddply from the plyr package:
require(plyr)
ddply(df,.(id,age),summarise,no = sum(no))
In this particular example the results are identical. However, this is not always the case; the difference between the two functions is outlined here. Both functions have their uses and are worth exploring, which is why I felt this alternative should be mentioned.
