I have a dataframe testData made up of many unique ids. My objective is to identify, for each id, whether it contains every integer in a range for month, yday, and week, where the minimum of the range is the smallest value for that id and the maximum is the largest value in the entire column.
Please note this is different from the related question here
In other words, if an id has all possible values in that range for month, it should receive a t. For example, under month where id = 1, the minimum value for the id is 2 and the maximum value for the whole column is 5, so id 1 should receive a t because the values 2, 3, 4, and 5 are all present. Where id = 2, however, only the values 1, 2, 4, and 5 are present, so 3 was skipped and id 2 should receive an f.
So far, I have code that checks against all the values in the entire range of the column (but NOT starting from the min value per id):
library(data.table)
setDT(testData)
output <- testData[, .(month = all(unique(testData$month) %in% .SD$month),
                       yday  = all(unique(testData$yday)  %in% .SD$yday),
                       week  = all(unique(testData$week)  %in% .SD$week)),
                   by = id]
Any idea how I could integrate this, where min is the minimum value per id and max is the maximum value in the whole column?
> testData
id month yday week
1 1 2 1 1
2 3 1 2 1
3 4 1 3 1
4 2 1 4 1
5 3 3 5 2
6 4 3 6 3
7 2 2 7 1
8 3 1 8 3
9 1 2 9 2
10 5 4 10 3
11 3 2 11 1
12 4 4 12 1
13 5 4 13 2
14 1 3 14 3
15 1 4 15 1
16 1 5 16 2
17 2 4 17 3
18 2 5 18 1
19 5 5 19 1
> dput(testData)
structure(list(id = c(1L, 3L, 4L, 2L, 3L, 4L, 2L, 3L, 1L, 5L,
3L, 4L, 5L, 1L, 1L, 1L, 2L, 2L, 5L), month = c(2L, 1L, 1L, 1L,
3L, 3L, 2L, 1L, 2L, 4L, 2L, 4L, 4L, 3L, 4L, 5L, 4L, 5L, 5L),
yday = 1:19, week = c(1L, 1L, 1L, 1L, 2L, 3L, 1L, 3L, 2L,
3L, 1L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 1L)), .Names = c("id",
"month", "yday", "week"), class = "data.frame", row.names = c(NA,
-19L))
In the end, the output should look like this:
> output
id month yday week
1 1 t f t
2 2 f f f
3 3 f f t
4 4 f f f
5 5 t f t
Using dplyr you can group by id and then check whether all elements of the range are present in the values for each group. Note that min(month) gives the min for the grouped id, while max(testData$month) gives the max for the whole column.
library(dplyr)
tD2 <- testData %>%
  group_by(id) %>%
  summarise(month = all(min(month):max(testData$month) %in% month),
            yday  = all(min(yday):max(testData$yday)   %in% yday),
            week  = all(min(week):max(testData$week)   %in% week))
tD2
# A tibble: 5 × 4
id month yday week
<int> <lgl> <lgl> <lgl>
1 1 TRUE FALSE TRUE
2 2 FALSE FALSE FALSE
3 3 FALSE FALSE TRUE
4 4 FALSE FALSE FALSE
5 5 TRUE FALSE TRUE
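For completeness, the same min-per-id / column-wide-max logic can also be written in data.table; this is a sketch that assumes testData is the data frame from the dput above.
library(data.table)
setDT(testData)
# For each id, build the sequence from that id's minimum to the column-wide
# maximum and check that every value in that sequence occurs for the id.
output <- testData[, .(month = all(min(month):max(testData$month) %in% month),
                       yday  = all(min(yday):max(testData$yday)   %in% yday),
                       week  = all(min(week):max(testData$week)   %in% week)),
                   by = id]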
I have some sequence event data for which I want to plot the trend of missingness on value across time. Example below:
id time value
1 aa122 1 1
2 aa2142 1 1
3 aa4341 1 1
4 bb132 1 2
5 bb2181 2 1
6 bb3242 2 3
7 bb3321 2 NA
8 cc122 2 1
9 cc2151 2 2
10 cc3241 3 1
11 dd161 3 3
12 dd2152 3 NA
13 dd3282 3 NA
14 ee162 3 1
15 ee2201 4 2
16 ee3331 4 NA
17 ff1102 4 NA
18 ff2141 4 NA
19 ff3232 5 1
20 gg142 5 3
21 gg2192 5 NA
22 gg3311 5 NA
23 gg4362 5 NA
24 ii111 5 NA
The NAs are supposed to increase over time (the behaviors are fading). How do I plot the NAs across time?
I think this is what you're looking for? You want to see how many NAs appear over time. Assuming this is correct, if each time is a group, then you can count the number of NAs that appear in each group.
data:
df <- structure(list(id = structure(1:24, .Label = c("aa122", "aa2142",
"aa4341", "bb132", "bb2181", "bb3242", "bb3321", "cc122", "cc2151",
"cc3241", "dd161", "dd2152", "dd3282", "ee162", "ee2201", "ee3331",
"ff1102", "ff2141", "ff3232", "gg142", "gg2192", "gg3311", "gg4362",
"ii111"), class = "factor"), time = c(1L, 1L, 1L, 1L, 2L, 2L,
2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L,
5L, 5L), value = c(1L, 1L, 1L, 2L, 1L, 3L, NA, 1L, 2L, 1L, 3L,
NA, NA, 1L, 2L, NA, NA, NA, 1L, 3L, NA, NA, NA, NA)), class = "data.frame", row.names = c(NA,
-24L))
library(tidyverse)
library(ggplot2)
df %>%
  group_by(time) %>%
  summarise(sumNA = sum(is.na(value)))
# A tibble: 5 × 2
time sumNA
<int> <int>
1 1 0
2 2 1
3 3 2
4 4 3
5 5 4
You can then plot this using ggplot2
df %>%
  group_by(time) %>%
  summarise(sumNA = sum(is.na(value))) %>%
  ggplot(aes(x = time)) +
  geom_line(aes(y = sumNA))
As you can see, as time increases, the number of NA's also increases
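If the number of observations per time point varies, a variant you might prefer (an assumption about what "trend of missingness" should show, not part of the answer above) is the share of NA values per time point rather than the raw count:
df %>%
  group_by(time) %>%
  summarise(propNA = mean(is.na(value))) %>%  # proportion of NA per time point
  ggplot(aes(x = time, y = propNA)) +
  geom_line() +
  geom_point()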
Within a group, I want to find the difference between each row and the first time that user appeared in the data. For example, I need to create the diff variable below. Users have different numbers of rows, as in the following data:
df <- structure(list(ID = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 3L, 4L, 4L),
money = c(9L, 12L, 13L, 15L, 5L, 7L, 8L, 5L, 2L, 10L), occurence = c(1L,
2L, 3L, 4L, 1L, 2L, 3L, 1L, 1L, 2L), diff = c(NA, 3L, 4L,
6L, NA, 2L, 3L, NA, NA, 8L)), .Names = c("ID", "money", "occurence",
"diff"), class = "data.frame", row.names = c(NA, -10L))
ID money occurence diff
1 1 9 1 NA
2 1 12 2 3
3 1 13 3 4
4 1 15 4 6
5 2 5 1 NA
6 2 7 2 2
7 2 8 3 3
8 3 5 1 NA
9 4 2 1 NA
10 4 10 2 8
You can use ave(). For each group we replace the first value with NA and subtract the first value from the remaining values.
with(df, ave(money, ID, FUN = function(x) c(NA, x[-1] - x[1])))
# [1] NA 3 4 6 NA 2 3 NA NA 8
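To store the result back in the data frame, you can assign it directly:
df$diff <- with(df, ave(money, ID, FUN = function(x) c(NA, x[-1] - x[1])))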
A dplyr solution, which uses the first() function to get the first value per group and calculates the difference.
library(dplyr)
df2 <- df %>%
  group_by(ID) %>%
  mutate(diff = money - first(money)) %>%
  # blank out the first row of each group (rather than every zero difference),
  # so a later row whose money equals the first value is not wrongly set to NA
  mutate(diff = replace(diff, row_number() == 1, NA)) %>%
  ungroup()
df2
# # A tibble: 10 x 4
# ID money occurence diff
# <int> <int> <int> <int>
# 1 1 9 1 NA
# 2 1 12 2 3
# 3 1 13 3 4
# 4 1 15 4 6
# 5 2 5 1 NA
# 6 2 7 2 2
# 7 2 8 3 3
# 8 3 5 1 NA
# 9 4 2 1 NA
# 10 4 10 2 8
Update
Here is a data.table solution provided by Sotos. Notice that there is no need to replace 0 with NA.
library(data.table)
setDT(df)[, money := money - first(money), by = ID][]
# ID money occurence diff
# 1: 1 0 1 NA
# 2: 1 3 2 3
# 3: 1 4 3 4
# 4: 1 6 4 6
# 5: 2 0 1 NA
# 6: 2 2 2 2
# 7: 2 3 3 3
# 8: 3 0 1 NA
# 9: 4 0 1 NA
# 10: 4 8 2 8
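If you do want NA in the first row of each group, as in the dplyr output above, a small variant of the same idea (a sketch, starting again from the original df, since the := call above overwrites money in place) would be:
setDT(df)[, diff := c(NA_integer_, money[-1] - money[1]), by = ID][]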
DATA
dput(df)
structure(list(ID = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 3L, 4L, 4L),
money = c(9L, 12L, 13L, 15L, 5L, 7L, 8L, 5L, 2L, 10L), occurence = c(1L,
2L, 3L, 4L, 1L, 2L, 3L, 1L, 1L, 2L)), .Names = c("ID", "money",
"occurence"), row.names = c(NA, -10L), class = "data.frame")
I have a dataframe testData made up of many unique ids. My objective is to identify whether or not each id contains all of the numbers in the range of month, yday, and week.
In other words, if an id has all possible values in the range for month, it should receive a t; likewise for yday and week. Otherwise, it should receive an f.
A sample of the data looks like this:
> testData
id month yday week
1 1 1 1 1
2 3 1 2 1
3 4 1 3 1
4 2 1 4 1
5 3 3 5 1
6 4 1 6 1
7 2 1 7 1
8 3 1 8 2
9 1 1 9 2
10 5 1 10 2
11 3 2 11 1
12 4 1 12 1
13 5 1 13 1
14 1 1 14 1
The output should look something like this:
> output
id month yday week
1 1 f f t
2 2 f f f
3 3 t f t
4 4 f f f
5 5 f f t
I know that one can check whether numbers are within a certain range with findInterval(), but could someone suggest a method to check whether a vector contains all integers within a range?
> dput(testData)
structure(list(id = c(1L, 3L, 4L, 2L, 3L, 4L, 2L, 3L, 1L, 5L,
3L, 4L, 5L, 1L), month = c(1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L,
1L, 2L, 1L, 1L, 1L), yday = 1:14, week = c(1L, 1L, 1L, 1L, 1L,
1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L)), .Names = c("id", "month",
"yday", "week"), class = "data.frame", row.names = c(NA, -14L
))
Easy with data.table
library(data.table)
setDT(testData)
output <- testData[, .(month = all(unique(testData$month) %in% .SD$month),
                       yday  = all(unique(testData$yday)  %in% .SD$yday),
                       Week  = all(unique(testData$week)  %in% .SD$week)),
                   by = id]
output
id month yday Week
1: 1 FALSE FALSE TRUE
2: 2 FALSE FALSE FALSE
3: 3 TRUE FALSE TRUE
4: 4 FALSE FALSE FALSE
5: 5 FALSE FALSE TRUE
Here's how to do it with dplyr:
library(dplyr)
testData_copy <- testData
testData %>%
  group_by(id) %>%
  summarise(month = n_distinct(month) == n_distinct(testData_copy$month),
            yday  = n_distinct(yday)  == n_distinct(testData_copy$yday),
            week  = n_distinct(week)  == n_distinct(testData_copy$week))
# A tibble: 5 × 4
id month yday week
<int> <lgl> <lgl> <lgl>
1 1 FALSE FALSE TRUE
2 2 FALSE FALSE FALSE
3 3 TRUE FALSE TRUE
4 4 FALSE FALSE FALSE
5 5 FALSE FALSE TRUE
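For reference, here is a base-R sketch of the same check; all_values_present is a hypothetical helper, and testData is assumed to be the data frame from the dput above.
# For each column, test whether every value occurring anywhere in that column
# also occurs within each id's subset of rows.
all_values_present <- function(x, group) {
  tapply(x, group, function(v) all(unique(x) %in% v))
}
data.frame(id    = sort(unique(testData$id)),
           month = all_values_present(testData$month, testData$id),
           yday  = all_values_present(testData$yday,  testData$id),
           week  = all_values_present(testData$week,  testData$id))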
I would like to identify sequential 24 hour periods in GPS data. I have a datetime column that is numeric (e.g. 41422.29), and I know each whole-number increment is a day. I know how to get the day (just round), but my schedule does not follow calendar days. Instead, I would like to identify all of the rows that are within 24 hours of the first row, and then go from there. I cannot use a count of rows, as 24 hours is not divided into equal increments.
This is my logic so far, though it doesn't get me where I need to be:
for (i in 1:length(example)) {
  base <- round(example$DT_LMT[i], digits = 0)
  if (example$DT_LMT[i] <= base + 1) {
    example$DaySeq <- base
  } else {
    base + 1
  }
}
I have a dummy data set example, with the kind of thing I would like:
structure(list(ID = 1:19, DT_LMT = c(41423.62517, 41423.79236,
41423.95868, 41424.12534, 41424.29203, 41424.45888, 41424.62535,
41424.79186, 41424.95852, 41425.12502, 41425.29185, 41425.75016,
41425.79201, 41425.83352, 41425.87534, 41425.91744, 41425.95868,
41426.00105, 41426.04257), NEED = c(1L, 1L, 1L, 1L, 1L, 1L, 2L,
2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L)), .Names = c("ID",
"DT_LMT", "NEED"), class = "data.frame", row.names = c(NA, -19L
))
Here is one approach, assuming df is the data assigned in your question. I created a new variable, need, which I believe is your desired outcome.
transform(df, need = trunc(DT_LMT - DT_LMT[1]) + 1)
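As a quick sanity check (assuming the dput above was assigned to df), the computed need column matches the NEED column given in the question:
res <- transform(df, need = trunc(DT_LMT - DT_LMT[1]) + 1)
all(res$need == res$NEED)
# [1] TRUE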
I would add 1 to the first value and use that to filter the data frame.
data<-data.frame(ID = 1:19, DT_LMT = c(41423.62517, 41423.79236,
41423.95868, 41424.12534, 41424.29203, 41424.45888, 41424.62535,
41424.79186, 41424.95852, 41425.12502, 41425.29185, 41425.75016,
41425.79201, 41425.83352, 41425.87534, 41425.91744, 41425.95868,
41426.00105, 41426.04257), NEED = c(1L, 1L, 1L, 1L, 1L, 1L, 2L,
2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L))
data[data$DT_LMT<=data$DT_LMT[1]+1,]
Output:
ID DT_LMT NEED
1 1 41423.63 1
2 2 41423.79 1
3 3 41423.96 1
4 4 41424.13 1
5 5 41424.29 1
6 6 41424.46 1
If you want to split the data into a list by 24 hour period.
split(data, unlist(lapply(data$DT_LMT, function(x) {floor(x - data$DT_LMT[1])})))
Output:
$`0`
ID DT_LMT NEED
1 1 41423.63 1
2 2 41423.79 1
3 3 41423.96 1
4 4 41424.13 1
5 5 41424.29 1
6 6 41424.46 1
$`1`
ID DT_LMT NEED
7 7 41424.63 2
8 8 41424.79 2
9 9 41424.96 2
10 10 41425.13 2
11 11 41425.29 2
$`2`
ID DT_LMT NEED
12 12 41425.75 3
13 13 41425.79 3
14 14 41425.83 3
15 15 41425.88 3
16 16 41425.92 3
17 17 41425.96 3
18 18 41426.00 3
19 19 41426.04 3
To add a column with the day.
# unlist() so day is a plain numeric column rather than a list-column
data$day <- unlist(lapply(data$DT_LMT, function(x) {floor(x - data$DT_LMT[1]) + 1}))
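Since floor() is vectorised, the same column can also be created without lapply (a minor simplification, not part of the original answer):
data$day <- floor(data$DT_LMT - data$DT_LMT[1]) + 1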
This question already has answers here:
How to sum a variable by group
(18 answers)
Closed 6 years ago.
I have the following data frame:
Event Scenario Year Cost
1 1 1 10
2 1 1 5
3 1 2 6
4 1 2 6
5 2 1 15
6 2 1 12
7 2 2 10
8 2 2 5
9 3 1 4
10 3 1 5
11 3 2 6
12 3 2 5
I need to produce a pivot table/data frame that will sum the total cost per year for each scenario. The result will be:
Scenario Year Cost
1 1 15
1 2 12
2 1 27
2 2 15
3 1 9
3 2 11
I need to produce a ggplot line graph that plots the cost of each scenario per year. I know how to do that; I just can't get the right data frame.
Try
library(dplyr)
df %>% group_by(Scenario, Year) %>% summarise(Cost=sum(Cost))
Or
library(data.table)
setDT(df)[, list(Cost=sum(Cost)), by=list(Scenario, Year)]
Or
aggregate(Cost ~ Scenario + Year, df, sum)
data
df <- structure(list(Event = 1:12, Scenario = c(1L, 1L, 1L, 1L, 2L,
2L, 2L, 2L, 3L, 3L, 3L, 3L), Year = c(1L, 1L, 2L, 2L, 1L, 1L,
2L, 2L, 1L, 1L, 2L, 2L), Cost = c(10L, 5L, 6L, 6L, 15L, 12L,
10L, 5L, 4L, 5L, 6L, 5L)), .Names = c("Event", "Scenario", "Year",
"Cost"), class = "data.frame", row.names = c(NA, -12L))
The following does it:
library(plyr)
ddply(df, .(Scenario, Year), summarize, Cost = sum(Cost))
#Scenario Year Cost
#1 1 1 15
#2 1 2 12
#3 2 1 27
#4 2 2 15
#5 3 1 9
#6 3 2 11
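For the follow-up step the question mentions (the line graph of cost per scenario and year), a possible sketch building on the summarised data; the column names are taken from the dput above.
library(dplyr)
library(ggplot2)
df %>%
  group_by(Scenario, Year) %>%
  summarise(Cost = sum(Cost)) %>%
  ggplot(aes(x = Year, y = Cost, colour = factor(Scenario))) +
  geom_line() +
  geom_point()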