Switching list elements with dataframe rows in R

Consider my list IDs, which has a dataframe of behaviours in each element:
IDs <- list(
  Dave = data.frame(Behaviour = c("Aggression", "Interaction", "Nursing"),
                    number = c(20, 10, 5),
                    duration = c(60, 39, 27)),
  James = data.frame(Behaviour = c("Aggression", "Interaction"),
                     number = c(21, 30),
                     duration = c(30, 49))
)
IDs
$Dave
Behaviour number duration
1 Aggression 20 60
2 Interaction 10 39
3 Nursing 5 27
$James
Behaviour number duration
1 Aggression 21 30
2 Interaction 30 49
Note that James does not exhibit any nursing behaviour, so the two list elements have different numbers of rows.
I want to switch the list elements with the dataframe rows, so that I have a list of behaviours with a dataframe of IDs in each one. It should look like this:
$Aggression
ID number duration
1 Dave 20 60
2 James 21 30
$Interaction
ID number duration
1 Dave 10 39
2 James 30 49
$Nursing
ID number duration
1 Dave 5 27
I thought that it could be achieved with reshape2::melt, but I wasn't able to get further than melt(IDs, id = "Behaviour").
Any ideas?

Generally you can do it in two steps:
turning the list into a single data.frame/data.table
splitting it based on Behaviour
You can do it like this, for example:
dt <- data.table::rbindlist(IDs, idcol = "ID")
# or: dt <- dplyr::bind_rows(IDs, .id = "ID")
split(dt, dt$Behaviour)
Note:
If you don't want the Behaviour column in the result and you used the data.table approach, you can modify the split to:
split(dt[,!"Behaviour"], dt$Behaviour)
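If you went the bind_rows route and also want to drop the Behaviour column, a dplyr-flavoured sketch (not part of the original answer) could look like this:
library(dplyr)
combined <- bind_rows(IDs, .id = "ID")
# split() keeps the behaviour names on the resulting list
split(select(combined, -Behaviour), combined$Behaviour)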

Try this:
tmp <- data.frame(ID = rep(names(IDs), vapply(IDs, nrow, 1L)),
                  do.call(rbind, IDs),
                  row.names = NULL)
split(tmp[-2], tmp$Behaviour)
#$Aggression
# ID number duration
#1 Dave 20 60
#4 James 21 30
#$Interaction
# ID number duration
#2 Dave 10 39
#5 James 30 49
#$Nursing
# ID number duration
#3 Dave 5 27

Or using base R
d1 <- do.call(rbind, Map(cbind, id = names(IDs), IDs))
split(d1, d1$Behaviour)

Related

Filter by Condition occurring Consecutively in R

I'm hoping to see if there is a dplyr solution to this problem as I'm building a survival dataset.
I am looking to create my 'event' coding that would satisfy a particular condition if it occurs twice consecutively. In this case, the event condition would be if Var was > 21 for two consecutive dates. For example, in the following dataset:
ID Date Var
1 1/1/20 22
1 1/3/20 23
2 1/2/20 23
2 2/10/20 18
2 2/16/20 21
3 12/1/19 16
3 12/6/19 14
3 12/20/19 22
In this case, patient 1 should remain, and patients 2 and 3 should be filtered out because > 21 did not happen consecutively for them. Then I'd like to simply take the maximum date for each ID so that I can easily calculate the time to the event.
Final result:
ID Date Var
1 1/3/20 23
Thank you
As long as the dates are sorted within each ID (the latest date comes later in the table), this should work. It's written in data.table since I don't use dplyr that much, but the idea should translate fairly directly.
library(data.table)
setDT(df)
# keep rows where Var > 21 and the following row's Var is also > 21
df = df[Var > 21 & shift(Var > 21, n = -1), ]
# then keep the last (latest) remaining row per ID
df = unique(df, by = "ID", fromLast = T)
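Since the question asks for dplyr specifically, a rough equivalent (a sketch only; it assumes Date is a character column in m/d/yy format and that rows are already sorted by date within each ID) would be:
library(dplyr)
df %>%
  group_by(ID) %>%
  # keep IDs where Var > 21 on two consecutive rows
  filter(any(Var > 21 & lead(Var) > 21, na.rm = TRUE)) %>%
  # then keep only the latest date per remaining ID
  slice_max(as.Date(Date, format = "%m/%d/%y"), n = 1) %>%
  ungroup()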

Only changing a single variable in R

I have a dataframe dt:
Group Age Sales
A1234 12 1000
A2312 11 900
B2100 23 2100
...
I intend to create a new dataframe by modifying the Group variable, taking only a substring of Group. At present, I can do it in two steps:
dt1 <- dt
dt1$Group <- substr(dt$Group, 1, 2)
Is it possible to do the above in a single command? I imagine the current approach would get tedious if I have to create and transform many intermediate dataframes along the way.
You can try:
dt1 <- `$<-`(dt, "Group", substr(dt$Group, 1, 2))
dt1
# Group Age Sales
#1 A1 12 1000
#2 A2 11 900
#3 B2 23 2100
dt
# Group Age Sales
#1 A1234 12 1000
#2 A2312 11 900
#3 B2100 23 2100
The original table is unchanged and you get the new one with a single line.
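For comparison (not from the answer above), the more common one-liners for this are base R's transform() or dplyr::mutate(), both of which also leave dt unchanged:
# base R
dt1 <- transform(dt, Group = substr(Group, 1, 2))
# dplyr equivalent
dt1 <- dplyr::mutate(dt, Group = substr(Group, 1, 2))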

How to run a loop in R to find a unique combination of numbers within a range of 7?

I have a dataset which looks something like this:-
Key Days
A 1
A 2
A 3
A 8
A 9
A 36
A 37
B 14
B 15
B 44
B 45
I would like to split the individual keys based on the days in groups of 7. For e.g.:-
Key Days
A 1
A 2
A 3
Key Days
A 8
A 9
Key Days
A 36
A 37
Key Days
B 14
B 15
Key Days
B 44
B 45
I could use ifelse and specify buckets of 1-7, 7-14 etc. up to 63-70 (the maximum possible value of days). However, the issue lies with the days column: there are many cases where days straddle a bucket boundary. Take days 14 and 15 as an example; with the ifelse logic they would land in two different brackets (7-14 and 15-21) even though they belong together.
The ideal method would be to take a day, add 7 to it, and check how many rows actually fall into that window. I think we need a loop for this. I could do it in Excel, but I have 20,000 rows of data for 2,000 keys, hence I'm using R. I would need a loop that checks each key and, for each key, buckets the days in groups of 7 based on the first day value of each range.
We create a grouping variable by applying integer division (%/% 7) on the 'Days' column and then split the dataset into a list based on that 'grp'.
grp <- df$Days %/% 7
split(df, factor(grp, levels = unique(grp)))
#$`0`
# Key Days
#1 A 1
#2 A 2
#3 A 3
#$`1`
# Key Days
#4 A 8
#5 A 9
#$`5`
# Key Days
#6 A 36
#7 A 37
#$`2`
# Key Days
#8 B 14
#9 B 15
#$`6`
# Key Days
#10 B 44
#11 B 45
Update
If we need to split by 'Key' also
lst <- split(df, list(factor(grp, levels = unique(grp)), df$Key), drop=TRUE)
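For a fully reproducible run, the example data can be rebuilt from the table in the question (column names Key and Days assumed) and both splits applied:
df <- data.frame(
  Key  = c(rep("A", 7), rep("B", 4)),
  Days = c(1, 2, 3, 8, 9, 36, 37, 14, 15, 44, 45)
)
grp <- df$Days %/% 7
split(df, factor(grp, levels = unique(grp)))                             # by 7-day bucket only
split(df, list(factor(grp, levels = unique(grp)), df$Key), drop = TRUE)  # by bucket and Key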

R - How can I find a duplicated line based in one Column and add extra text in that duplicated value?

I'm looking for an easy solution, instead of doing several steps.
I have a data frame with 36 variables and almost 3000 rows; one of the variables is a character column with names, which must be unique. I need to find the rows with the same name and add "duplicated" to the text. I can't delete the duplicated rows because the data come from a relational database and I'll need those row IDs for other operations.
I can find the duplicated rows and then rename the text manually, but that means finding the duplicates, recording the row IDs, and then replacing the names by hand.
Is there a way to automatically add the extra text to the duplicated names? I'm still new to R and have a hard time writing condition-based functions.
It would be something like this:
From this:
ID name age sex
1 John 18 M
2 Mary 25 F
3 Mary 19 F
4 Ben 21 M
5 July 35 F
To this:
ID name age sex
1 John 18 M
2 Mary 25 F
3 Mary - duplicated 19 F
4 Ben 21 M
5 July 35 F
Could you guys shed some light?
Thank you very much.
Edit: the comment about adding a column is probably the best thing to do, but if you really want to do what you're suggesting...
The duplicated function will identify the duplicates. Then you just need paste0 to append the text.
df <- data.frame(
  ID = 1:5,
  name = c('John', 'Mary', 'Mary', 'Ben', 'July'),
  age = c(18, 25, 19, 21, 35),
  sex = c('M', 'F', 'F', 'M', 'F'),
  stringsAsFactors = FALSE)
# Add "-duplicated" to every duplicated value (following Laterow's comment)
dup <- duplicated(df$name)
df$name[dup] <- paste0(df$name[dup], '-duplicated')
df
ID name age sex
1 1 John 18 M
2 2 Mary 25 F
3 3 Mary-duplicated 19 F
4 4 Ben 21 M
5 5 July 35 F
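As an aside (not from the answer above): if a numbered suffix would be acceptable instead of the literal "duplicated" text, base R's make.unique() handles this directly when applied to the original, unmodified df:
# appends ".1", ".2", ... to repeated names; the first occurrence is left as-is
df$name <- make.unique(df$name)
df$name
# [1] "John"   "Mary"   "Mary.1" "Ben"    "July"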

Assign rows to a group based on spatial neighborhood and temporal criteria in R

I have an issue that I just cannot seem to sort out. I have a dataset that was derived from a raster in arcgis. The dataset represents every fire occurrence during a 10-year period. Some raster cells had multiple fires within that time period (and, thus, will have multiple rows in my dataset) and some raster cells will not have had any fire (and, thus, will not be represented in my dataset). So, each row in the dataset has a column number (sequential integer) and a row number assigned to it that corresponds with the row and column ID from the raster. It also has the date of the fire.
I would like to assign a unique ID (fire_ID) to all of the fires that are within 4 days of each other and in adjacent pixels from one another (within the 8-cell neighborhood) and put this into a new column.
To clarify, if there were an observation from row 3, col 3, Jan 1, 2000 and another from row 2, col 4, Jan 4, 2000, those observations would be assigned the same fire_ID.
Below is a sample dataset with "rows", which are the row IDs of the raster, "cols", which are the column IDs of the raster, and "dates" which are the dates the fire was detected.
rows<-sample(seq(1,50,1),600, replace=TRUE)
cols<-sample(seq(1,50,1),600, replace=TRUE)
dates<-sample(seq(from=as.Date("2000/01/01"), to=as.Date("2000/02/01"), by="day"),600, replace=TRUE)
fire_df<-data.frame(rows, cols, dates)
I've tried sorting the data by "row", then "column", then "date" and looping through to create a new fire_ID when the row and column IDs were within one of each other and the date was within 4 days, but this obviously doesn't work: fires that should share a fire_ID get different fire_IDs whenever observations belonging to a different fire sit between them in the sorted list.
fire_df2 <- fire_df[order(fire_df$rows, fire_df$cols, fire_df$dates), ]
fire_ID <- numeric(length = nrow(fire_df2))
fire_ID[1] <- 1
for (i in 2:nrow(fire_df2)) {
  fire_ID[i] <- ifelse(
    fire_df2$rows[i] - fire_df2$rows[i - 1] <= abs(1) &
      fire_df2$cols[i] - fire_df2$cols[i - 1] <= abs(1) &
      fire_df2$dates[i] - fire_df2$dates[i - 1] <= abs(4),
    fire_ID[i - 1],
    i)
}
length(unique(fire_ID))
fire_df2$fire_ID<-fire_ID
Please let me know if you have any suggestions.
I think this task requires something along the lines of hierarchical clustering.
Note, however, that there will necessarily be some degree of arbitrariness in the ids. This is because it is entirely possible that a cluster of fires spans more than 4 days in total, yet every fire in it is less than 4 days away from some other fire in that cluster (and thus should have the same id).
library(dplyr)
# Create the distances
fire_dist <- fire_df %>%
  # Normalize dates by the 4-day window
  mutate(norm_dates = as.numeric(dates) / 4) %>%
  # Only keep the three variables of interest
  select(rows, cols, norm_dates) %>%
  # Compute distance using the L-infinity norm (maximum)
  dist(method = "maximum")
# Do hierarchical clustering with "single" aggl method
fire_clust <- hclust(fire_dist, method="single")
# Cut the tree at height 1 and obtain groups
group_id <- cutree(fire_clust, h=1)
# First attach the group ids back to the data frame
fire_df2 <- cbind(fire_df, group_id) %>%
  # Then sort the data
  arrange(group_id, dates, rows, cols)
# Print the first 10 records
fire_df2[1:10,]
(Make sure you have the dplyr library installed; you can run install.packages("dplyr", dep = TRUE) if it is not. It is a really good and very popular library for data manipulation.)
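If you would rather avoid the dplyr dependency, the same clustering can be written in base R (a sketch using the same normalisation and cut height):
# normalise dates by the 4-day window and cluster with the maximum (Chebyshev) distance
m <- cbind(fire_df$rows, fire_df$cols, as.numeric(fire_df$dates) / 4)
fire_clust <- hclust(dist(m, method = "maximum"), method = "single")
fire_df$group_id <- cutree(fire_clust, h = 1)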
A couple of simple tests:
Test #1. The same forest fire moving.
rows<-1:6
cols<-1:6
dates<-seq(from=as.Date("2000/01/01"), to=as.Date("2000/01/06"), by="day")
fire_df<-data.frame(rows, cols, dates)
gives me this:
rows cols dates group_id
1 1 1 2000-01-01 1
2 2 2 2000-01-02 1
3 3 3 2000-01-03 1
4 4 4 2000-01-04 1
5 5 5 2000-01-05 1
6 6 6 2000-01-06 1
Test #2. 6 different random forest fires.
set.seed(1234)
rows<-sample(seq(1,50,1),6, replace=TRUE)
cols<-sample(seq(1,50,1),6, replace=TRUE)
dates<-sample(seq(from=as.Date("2000/01/01"), to=as.Date("2000/02/01"), by="day"),6, replace=TRUE)
fire_df<-data.frame(rows, cols, dates)
output:
rows cols dates group_id
1 6 1 2000-01-10 1
2 32 12 2000-01-30 2
3 31 34 2000-01-10 3
4 32 26 2000-01-27 4
5 44 35 2000-01-10 5
6 33 28 2000-01-09 6
Test #3. One expanding forest fire.
dates <- seq(from=as.Date("2000/01/01"), to=as.Date("2000/01/06"), by="day")
rows_start <- 50
cols_start <- 50
fire_df <- data.frame(dates = dates) %>%
  rowwise() %>%
  do({
    diff <- as.numeric(.$dates - as.Date("2000/01/01"))
    expand.grid(rows  = seq(rows_start - diff, rows_start + diff),
                cols  = seq(cols_start - diff, cols_start + diff),
                dates = .$dates)
  })
gives me:
rows cols dates group_id
1 50 50 2000-01-01 1
2 49 49 2000-01-02 1
3 49 50 2000-01-02 1
4 49 51 2000-01-02 1
5 50 49 2000-01-02 1
6 50 50 2000-01-02 1
7 50 51 2000-01-02 1
8 51 49 2000-01-02 1
9 51 50 2000-01-02 1
10 51 51 2000-01-02 1
and so on. (All records are correctly identified as belonging to the same forest fire.)
