I have collected a dataframe that models the duration of events in a group problem-solving session in which the members communicate (Discourse Code) and construct models (Modeling Code). Each minute at which an event occurs is captured in the Time_Processed column; technically these events occur simultaneously. I would like to know how long the students spend constructing each type of model, i.e. the total duration of that model, or the time elapsed before that model changes.
I have the following dataset, which looks like this:
`Modeling Code` `Discourse Code` Time_Processed
<fct> <fct> <dbl>
1 OFF OFF 10.0
2 MA Q 11.0
3 MA AG 16.0
4 V S 18.0
5 V Q 20.0
6 MA C 21.0
7 MA C 23.0
8 MA C 25.0
9 V J 26.0
10 P S 28.0
# My explicit dataframe.
df <- structure(list(`Modeling Code` = structure(c(3L, 2L, 2L, 6L,
6L, 2L, 2L, 2L, 6L, 4L), .Label = c("A", "MA", "OFF", "P", "SM",
"V"), class = "factor"), `Discourse Code` = structure(c(7L, 8L,
1L, 9L, 8L, 2L, 2L, 2L, 6L, 9L), .Label = c("AG", "C", "D", "DA",
"G", "J", "OFF", "Q", "S"), class = "factor"), Time_Processed = c(10,
11, 16, 18, 20, 21, 23, 25, 26, 28)), row.names = c(NA, -10L), .Names = c("Modeling Code",
"Discourse Code", "Time_Processed"), class = c("tbl_df", "tbl",
"data.frame"))
For this dataframe I can work out how long the students were constructing each type of model logically, with respect to the Modeling Code and Time_Processed columns:
At 10 minutes they are using the OFF model method, then at 11 minutes, they change the model so the duration of the OFF model is (11 - 10) minutes = 1 minute. There are no other occurrences of the "OFF" method so the duration of OFF = 1 min.
Likewise, Modeling Code method "MA" is used from 11 minutes to 16 minutes (duration = 5 minutes), then from 16 minutes to 18 minutes before the model changes to "V" (duration = 2 minutes); the model is then used again from 21 minutes to 26 minutes (duration = 5 minutes). So the total duration of "MA" is (5 + 2 + 5) minutes = 12 minutes.
Likewise, the duration of Modeling Code method "V" starts at 18 minutes and ends at 21 minutes (duration = 3 minutes), then resumes at 26 minutes and ends at 28 minutes (duration = 2 minutes). So the total duration of "V" is 3 + 2 = 5 minutes.
Then Modeling Code "P" starts at 28 minutes with no subsequent time point, so the total duration of "P" is 0 minutes.
So the total duration (minutes) table of the Modeling Codes is this:
Modeling Code Total_Duration
OFF 1
MA 12
V 5
P 0
These totals can then be plotted as a bar chart.
How can the total duration of these modeling methods be constructed?
It would also be nice to know the duration of the combinations; the only visible combination in this small subset happens to be Modeling Code "MA" paired with Discourse Code "C", and this occurs for 26 - 21 = 5 minutes.
Thank you.
UPDATED SOLUTION
df %>%
mutate(dur = lead(Time_Processed) - Time_Processed) %>%
replace_na(list(dur = 0)) %>%
group_by(`Modeling Code`) %>%
summarise(tot_time = sum(dur))
(^ Thanks to Nick DiQuattro)
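The same lead()-based approach should extend to the combination question by grouping on both code columns; a sketch along those lines (not verified beyond the small sample above):

```r
library(dplyr)
library(tidyr)

# Duration attributed to each Modeling Code / Discourse Code pair:
# each row's duration runs until the next observation.
df %>%
  mutate(dur = lead(Time_Processed) - Time_Processed) %>%
  replace_na(list(dur = 0)) %>%
  group_by(`Modeling Code`, `Discourse Code`) %>%
  summarise(tot_time = sum(dur))
```

On the sample data this assigns 5 minutes to the MA/C pair, matching the hand calculation above.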
PREVIOUS SOLUTION
Here's one solution that creates a new variable, mcode_grp, which keeps track of discrete groupings of the same Modeling Code. It's not particularly pretty - it requires looping over each row in df - but it works.
First, rename columns for ease of reference:
df <- df %>%
rename(m_code = `Modeling Code`,
d_code = `Discourse Code`)
We'll update df with a few extra variables.
- lead_time_proc gives us the Time_Processed value for the next row in df, which we'll need when computing the total amount of time for each m_code batch
- row_n for keeping track of row number in our iteration
- mcode_grp is the unique label for each m_code batch
df <- df %>%
mutate(lead_time_proc = lead(Time_Processed),
row_n = row_number(),
mcode_grp = "")
Next, we need a way to keep track of when we've hit a new batch of a given m_code value. One way is to keep a counter for each m_code, and increment it whenever a new batch is reached. Then we can label all the rows for that m_code batch as belonging to the same time window.
mcode_ct <- df %>%
group_by(m_code) %>%
summarise(ct = 0) %>%
mutate(m_code = as.character(m_code))
This is the ugliest part. We loop over every row in df, and check to see if we've reached a new m_code. If so, we update accordingly, and register a value for mcode_grp for each row.
mc <- ""
for (i in 1:nrow(df)) {
current_mc <- df$m_code[i]
if (current_mc != mc) {
mc <- current_mc
mcode_ct <- mcode_ct %>% mutate(ct = ifelse(m_code == mc, ct + 1, ct))
current_grp <- mcode_ct %>% filter(m_code == mc) %>% select(ct) %>% pull()
}
df <- df %>% mutate(mcode_grp = ifelse(row_n == i, current_grp, mcode_grp))
}
Finally, group_by m_code and mcode_grp, compute the duration for each batch, and then sum over m_code values.
df %>%
group_by(m_code, mcode_grp) %>%
summarise(start_time = min(Time_Processed),
end_time = max(lead_time_proc)) %>%
mutate(total_time = end_time - start_time) %>%
group_by(m_code) %>%
summarise(total_time = sum(total_time)) %>%
replace_na(list(total_time=0))
Output:
# A tibble: 4 x 2
m_code total_time
<fct> <dbl>
1 MA 12.
2 OFF 1.
3 P 0.
4 V 5.
For any dplyr/tidyverse experts out there, I'd love tips on how to accomplish more of this without resorting to loops and counters!
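For what it's worth, one loop-free sketch labels each batch by counting the rows where m_code changes (a cumsum over a lag() comparison; the lag default for the first row is an assumption). Starting again from the original df, before the rename and mutate above:

```r
library(dplyr)
library(tidyr)

df %>%
  rename(m_code = `Modeling Code`) %>%
  # a batch id that increments whenever m_code differs from the previous row
  mutate(mcode_grp = cumsum(m_code != lag(m_code, default = m_code[1])),
         lead_time_proc = lead(Time_Processed)) %>%
  group_by(m_code, mcode_grp) %>%
  summarise(batch_time = max(lead_time_proc) - min(Time_Processed)) %>%
  group_by(m_code) %>%
  summarise(total_time = sum(batch_time)) %>%
  replace_na(list(total_time = 0))
```

This reproduces the same per-batch grouping as mcode_grp without the row-by-row loop or the counter table.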
Related
I am trying to write a loop in R that creates a new variable based on a table of conditional outcomes.
I have four treatment groups (A, B, C, D). Each treatment group pays a different price at three different time periods (day, dinner, night).
Treatment Group Day Price Dinnertime Price Night Price
A 10 20 7
B 11 25 8
C 12 30 9
D 13 35 10
The time period is recorded as a given "hour" (day is hours 8-17, dinner is from 17-19 and night is from 19-0 and 0-8).
Hour Usage
Person 1 1 0
Person 1 2 0
Person 2 20 5
Person 3 17 6
Based on both treatment group (A, B, C and D) and time of day (night, day, dinnertime), I would like to create a new vector of prices.
Ideally, I would create dummy variables for each of the time periods (day, night and dinner) based on these hourly conditions. However, my data set is pretty large (24 observations per person per day) so I'm looking for a more elegant solution.
In plain language, I want this:
if group==A & time==night, then price=7 --> and this information saved in a new variable "price"
Any advice?
Edit: Question is about the loop with two conditions. Is there a way to refer this directly to the data-frame with the treatment groups and tariffs or do I just need to write it manually?
This assumes that you have some way of including a column for the group each person belongs to in the dataframe with the transactions on it. Then something like this may work for you.
df.pricing <- structure(list(Treatment.Group = c("A", "B", "C", "D"), Day.Price = 10:13,
Dinnertime.Price = c(20L, 25L, 30L, 35L), Night.Price = 7:10),
.Names = c("Treatment.Group", "Day.Price", "Dinnertime.Price", "Night.Price"),
class = "data.frame",
row.names = c(NA, -4L))
df.transactions <- structure(list(Person = c("Person1", "Person1", "Person2", "Person3", "Person4"),
Hour = c(1L, 2L, 20L, 17L, 9L),
Usage = c(0L, 0L, 5L, 6L, 2L)),
.Names = c("Person", "Hour", "Usage"),
class = "data.frame", row.names = c(NA, -5L))
# Add the group that each person belongs to
df.transactions$group <- c("A","A","B","C","D")
# Get the transaction price
df.transactions$price <- apply(df.transactions, 1, function(x){
hour <- as.numeric(x[["Hour"]])
price <- ifelse(hour >= 8 & hour <= 16, df.pricing[df.pricing$Treatment.Group == x[["group"]], "Day.Price"],
ifelse((hour > 16 & hour <= 18), df.pricing[df.pricing$Treatment.Group == x[["group"]], "Dinnertime.Price"],
df.pricing[df.pricing$Treatment.Group == x[["group"]], "Night.Price"]))})
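An alternative sketch that avoids the nested ifelse(): reshape df.pricing into a long lookup table and join it on. This assumes the same hour cutoffs as the apply() version above; the period labels are derived from the df.pricing column names.

```r
library(dplyr)
library(tidyr)

# Long-format price lookup: one row per (group, period)
prices_long <- df.pricing %>%
  pivot_longer(-Treatment.Group, names_to = "period", values_to = "price") %>%
  mutate(period = sub("\\.Price$", "", period))

# Classify each transaction's hour, then join the matching price
df.transactions %>%
  mutate(period = case_when(
    Hour >= 8 & Hour <= 16 ~ "Day",
    Hour > 16 & Hour <= 18 ~ "Dinnertime",
    TRUE                   ~ "Night"
  )) %>%
  left_join(prices_long, by = c("group" = "Treatment.Group", "period"))
```

Adding another treatment group or time period then only requires editing the lookup table, not the conditional logic.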
I am working with a dataset of hourly temperatures and I need to calculate "degree hours" above a heat threshold for each extreme event. I intend to run stats on the intensities (combined magnitude and duration) of each event to compare multiple sites over the same time period.
Example of data:
Temp
1 14.026
2 13.714
3 13.25
.....
21189 12.437
21190 12.558
21191 12.703
21192 12.896
Data after selecting only hours above the threshold of 18 degrees and then subtracting 18 to reveal degrees above 18:
Temp
5297 0.010
5468 0.010
5469 0.343
5470 0.081
5866 0.010
5868 0.319
5869 0.652
After this step I need help to sum consecutive hours during which the reading exceeded my specified threshold.
What I am hoping to produce out of above sample:
Temp
1 0.010
2 0.434
3 0.010
4 0.971
I've debated manipulating these data within a time series or by adding additional columns, but I do not want multiple rows for each warming event. I would immensely appreciate any advice.
This is an alternative solution in base R.
You have some data that walks around, and you want to sum up the points above a cutoff. For example:
set.seed(99999)
x <- cumsum(rnorm(30))
plot(x, type='b')
abline(h=2, lty='dashed')
which looks like this:
First, we want to split the data into groups based on when they cross the cutoff. We can use run length encoding on the indicator to get a compressed version:
x.rle <- rle(x > 2)
which has the value:
Run Length Encoding
lengths: int [1:8] 5 2 3 1 9 4 5 1
values : logi [1:8] FALSE TRUE FALSE TRUE FALSE TRUE ...
The first group is the first 5 points where x > 2 is FALSE; the second group is the two following points, and so on.
We can create a group id by replacing the values in the rle object, and then back transforming:
x.rle$values <- seq_along(x.rle$values)
group <- inverse.rle(x.rle)
Finally, we aggregate by group, keeping only the data above the cut off:
aggregate(x~group, subset = x > 2, FUN=sum)
Which produces:
group x
1 2 5.113291213
2 4 2.124118005
3 6 11.775435706
4 8 2.175868979
I'd use data.table for this, although there are certainly other ways.
library( data.table )
setDT( df )
temp.threshold <- 18
First make a column showing the previous value from each one in your data. This will help to find the point at which the temperature rose above your threshold value.
df[ , lag := shift( Temp, fill = 0, type = "lag" ) ]
Now use that previous value column to compare with the Temp column. Mark every point at which the temperature rose above the threshold with a 1, and all other points as 0.
df[ , group := 0L
][ Temp > temp.threshold & lag <= temp.threshold, group := 1L ]
Now we can get cumsum of that new column, which will give each sequence after the temperature rose above the threshold its own group ID.
df[ , group := cumsum( group ) ]
Now we can get rid of every value not above the threshold.
df <- df[ Temp > temp.threshold, ]
And summarise what's left by finding the "degree hours" of each "group".
bygroup <- df[ , sum( Temp - temp.threshold ), by = group ]
I modified your input data a little to provide a couple of test events where the data rose above threshold:
structure(list(num = c(1L, 2L, 3L, 4L, 5L, 21189L, 21190L, 21191L,
21192L, 21193L, 21194L), Temp = c(14.026, 13.714, 13.25, 20,
19, 12.437, 12.558, 12.703, 12.896, 21, 21)), class = c("tbl_df",
"tbl", "data.frame"), row.names = c(NA, -11L), .Names = c("num",
"Temp"), spec = structure(list(cols = structure(list(num = structure(list(), class = c("collector_integer",
"collector")), Temp = structure(list(), class = c("collector_double",
"collector"))), .Names = c("num", "Temp")), default = structure(list(), class = c("collector_guess",
"collector"))), .Names = c("cols", "default"), class = "col_spec"))
With that data, here's the output of the code above (note $V1 is in "degree hours"):
> bygroup
group V1
1: 1 3
2: 2 6
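For comparison, the same rising-edge grouping idea can be sketched in dplyr. Here `temps` stands for the unfiltered hourly data (an assumed name, since the df above was overwritten after filtering):

```r
library(dplyr)

temp.threshold <- 18

temps %>%
  mutate(above = Temp > temp.threshold,
         # new event id each time the series rises above the threshold
         event = cumsum(above & !lag(above, default = FALSE))) %>%
  filter(above) %>%
  group_by(event) %>%
  summarise(degree_hours = sum(Temp - temp.threshold))
```

On the modified test data above this yields the same two events, with 3 and 6 degree hours.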
I am trying to make a stacked columns chart in R. Sorry, I am still learning, which is why I need help.
This is how I have the data:
structure(list(Category = structure(c(2L, 3L, 4L, 1L), .Label = c("MLC1000",
"MLC1051", "MLC1648", "MLC5726"), class = "factor"), Minutes = c(2751698L,
2478850L, 556802L, 2892097L), Items = c(684L, 607L, 135L, 711L
), Visits = c(130293L, 65282L, 25484L, 81216L), Sold = c(2625L,
1093L, 681L, 1802L)), .Names = c("Category", "Minutes", "Items",
"Visits", "Sold"), class = "data.frame", row.names = c(NA, -4L)
)
And I want to create this graphic:
I think there are two pretty basic principles that you should apply to make this problem easier to handle. First, you should make your data tidy. Second, you shouldn't leave ggplot to do your calculations for you.
library(tidyverse)
a <- data_frame(
category = letters[1:4],
minutes = c(2751698, 2478850, 556802, 2892097),
visits = c(130293, 65282, 25484, 81216),
sold = c(2625, 1093, 681, 1802)
) %>%
gather(variable, value, -category) %>% # make tidy
group_by(variable) %>%
mutate(weight = value / sum(value)) # calculate weight variable
## Source: local data frame [12 x 4]
## Groups: variable [3]
## category variable value weight
## <chr> <chr> <dbl> <dbl>
## 1 a minutes 2751698 0.31703610
## 2 b minutes 2478850 0.28559999
## 3 c minutes 556802 0.06415178
## 4 d minutes 2892097 0.33321213
## 5 a visits 130293 0.43104127
## 6 b visits 65282 0.21596890
## 7 c visits 25484 0.08430734
## 8 d visits 81216 0.26868249
## 9 a sold 2625 0.42331882
## 10 b sold 1093 0.17626189
## 11 c sold 681 0.10982100
## 12 d sold 1802 0.29059829
I don't know what was up with your structure(), but I couldn't build a data frame from it without crashing my R session.
Once we get the data into this format, the ggplot2 call is actually really easy:
ggplot(a, aes(x = variable, weight = weight * 100, fill = category)) +
geom_bar()
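If you prefer an explicit y aesthetic and percentage labels on the axis, a geom_col() variant of the same plot (assuming the tidy data frame a from above) might look like:

```r
library(ggplot2)

# Same stacked bars, but mapping weight to y directly
# and formatting the axis as percentages.
ggplot(a, aes(x = variable, y = weight, fill = category)) +
  geom_col() +
  scale_y_continuous(labels = scales::percent)
```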
As part of a project, I am currently using R to analyze some data. I am stuck retrieving a few values from an existing dataset which I imported from a csv file.
The file looks like:
For my analysis, I wanted to create another column that holds the difference between the current value of x and its previous value, except that the first value for every unique i should stay the same as it is currently. I am new to R and have been trying various approaches for some time now, but I am still not able to figure out a way to do this. I would appreciate suggestions on how to achieve this.
Mydata structure
structure(list(t = 1:10, x = c(34450L, 34469L, 34470L, 34483L,
34488L, 34512L, 34530L, 34553L, 34575L, 34589L), y = c(268880.73342868,
268902.322359863, 268938.194698248, 268553.521856105, 269175.38273083,
268901.619719038, 268920.864512966, 269636.604121984, 270191.206593437,
269295.344751692), i = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L)), .Names = c("t", "x", "y", "i"), row.names = c(NA, 10L), class = "data.frame")
You can use the package data.table to obtain what you want:
library(data.table)
setDT(MyData)[, x_diff := c(x[1], diff(x)), by=i]
MyData
# t x i x_diff
# 1: 1 34287 1 34287
# 2: 2 34789 1 502
# 3: 3 34409 1 -380
# 4: 4 34883 1 474
# 5: 5 34941 1 58
# 6: 6 34045 2 34045
# 7: 7 34528 2 483
# 8: 8 34893 2 365
# 9: 9 34551 2 -342
# 10: 10 34457 2 -94
Data:
set.seed(123)
MyData <- data.frame(t=1:10, x=sample(34000:35000, 10, replace=T), i=rep(1:2, e=5))
You can use the diff() function. If you want to add a new column to your existing data frame, the diff function will return a vector x-1 length of your current data frame though. so in your case you can try this:
# if your data frame is called MyData
MyData$newX = c(NA,diff(MyData$x))
That should input an NA value as the first entry in your new column and the remaining values will be the difference between sequential values in your "x" column
UPDATE:
You can create a simple loop by subsetting through every unique instance of "i" and then calculating the difference between your x values
# initialize a new dataframe
newdf = NULL
values = unique(MyData$i)
for(i in 1:length(values)){
data1 = MyData[MyData$i == values[i],]
data1$newX = c(NA,diff(data1$x))
newdf = rbind(newdf,data1)
}
# and then if you want to overwrite newdf to your original dataframe
MyData = newdf
# remove some variables
rm(data1,newdf,values)
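A loop-free equivalent with dplyr (keeping the first value of each i as-is, the way the question asks, rather than NA):

```r
library(dplyr)

MyData %>%
  group_by(i) %>%
  # first value of each group unchanged, then successive differences
  mutate(newX = c(first(x), diff(x))) %>%
  ungroup()
```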
I have a problem regarding results from an aggregate function in R. My aim is to select certain bird species from a data set and calculate the density
of observed individuals over the surveyed area. To that end, I took a subset of the main data file, then aggregated over area, calculating the
mean, and the number of individuals (represented by length of vector). Then I wanted to use the calculated mean area and number of individuals to
calculate density. That didn't work. The code I used is given below:
> head(data)
positionmonth positionyear quadrant Species Code sum_areainkm2
1 5 2014 1 Bar-tailed Godwit 5340 155.6562
2 5 2014 1 Bar-tailed Godwit 5340 155.6562
3 5 2014 1 Bar-tailed Godwit 5340 155.6562
4 5 2014 1 Bar-tailed Godwit 5340 155.6562
5 5 2014 1 Gannet 710 155.6562
6 5 2014 1 Bar-tailed Godwit 5340 155.6562
sub.gannet<-subset(data, Species == "Gannet")
sub.gannet<-data.frame(sub.gannet)
x<-sub.gannet
aggr.gannet<-aggregate(sub.gannet$sum_areainkm2, by=list(sub.gannet$positionyear, sub.gannet$positionmonth, sub.gannet$quadrant, sub.gannet$Species, sub.gannet$Code), FUN=function(x) c(observed_area=mean(x), NoInd=length(x)))
names(aggr.gannet)<-c("positionyear", "positionmonth", "quadrant", "species", "code", "x")
aggr.gannet<-data.frame(aggr.gannet)
> aggr.gannet
positionyear positionmonth quadrant species code x.observed_area x.NoInd
1 2014 5 4 Gannet 710 79.8257 10.0000
density <- c(aggr.gannet$x.NoInd/aggr.gannet$x.observed_area)
aggr.gannet <- cbind(aggr.gannet, density)
Error in data.frame(..., check.names = FALSE) :
Arguments imply differing number of rows: 1, 0
> density
numeric(0)
> aggr.gannet$x.observed_area
NULL
> aggr.gannet$x.NoInd
NULL
R doesn't seem to view the results from the function (observed_area and NoInd) as numeric values in their own right. That was already apparent when I couldn't give each of them its own name, but had to call them "x".
How can I calculate density under these circumstances? Or is there another way to aggregate with multiple functions over the same variable that will result in a usable output?
It's a quirk of aggregate with multiple aggregations that the results are stored in a matrix column nested inside the column for the aggregated variable.
The easiest way to get rid of this is to go through an as.list before as.data.frame, which flattens the data structure.
aggr.gannet <- as.data.frame(as.list(aggr.gannet))
It will still use x as the name. The way I discovered to fix this is to use the formula interface to aggregate, so your aggregate would look more like
aggr.gannet<-aggregate(
sum_areainkm2 ~ positionyear + positionmonth +
quadrant + Species + Code,
data=sub.gannet,
FUN=function(x) c(observed_area=mean(x), NoInd=length(x)))
Walking it through (here I haven't taken the subset to illustrate the aggregation by species)
df <- structure(list(positionmonth = c(5L, 5L, 5L, 5L, 5L, 5L), positionyear = c(2014L, 2014L, 2014L, 2014L, 2014L, 2014L), quadrant = c(1L, 1L, 1L, 1L, 1L, 1L), Species = structure(c(1L, 1L, 1L, 1L, 2L, 1L), .Label = c("Bar-tailed Godwit", "Gannet"), class = "factor"), Code = c(5340L, 5340L, 5340L, 5340L, 710L, 5340L), sum_areainkm2 = c(155.6562, 155.6562, 155.6562, 155.6562, 155.6562, 155.6562)), .Names = c("positionmonth", "positionyear", "quadrant", "Species", "Code", "sum_areainkm2"), class = "data.frame", row.names = c(NA, -6L))
df.agg <- as.data.frame(as.list(aggregate(
sum_areainkm2 ~ positionyear + positionmonth +
quadrant + Species + Code,
data=df,
FUN=function(x) c(observed_area=mean(x), NoInd=length(x)))))
Which results in what you want:
> df.agg
positionyear positionmonth quadrant Species Code
1 2014 5 1 Gannet 710
2 2014 5 1 Bar-tailed Godwit 5340
sum_areainkm2.observed_area sum_areainkm2.NoInd
1 155.6562 1
2 155.6562 5
> names(df.agg)
[1] "positionyear" "positionmonth"
[3] "quadrant" "Species"
[5] "Code" "sum_areainkm2.observed_area"
[7] "sum_areainkm2.NoInd"
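Once the columns have been flattened like this, the density the question asked for is just a column ratio. A minimal sketch using the df.agg from the walkthrough above (so the flattened names are sum_areainkm2.observed_area and sum_areainkm2.NoInd):

```r
# Individuals per km2 for each aggregated row
df.agg$density <- df.agg$sum_areainkm2.NoInd / df.agg$sum_areainkm2.observed_area
```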
Obligatory note here, that dplyr and data.table are powerful libraries that allow doing this sort of aggregation very simply and efficiently.
dplyr
Dplyr has some strange syntax (the %>% operator), but ends up being quite readable, and allows chaining more complex operations
> require(dplyr)
> df %>%
group_by(positionyear, positionmonth, quadrant, Species, Code) %>%
summarise(observed_area=mean(sum_areainkm2), NoInd = n())
data.table
data.table has a more compact syntax and may be faster with large datasets.
dt[,
.(observed_area=mean(sum_areainkm2), NoInd=.N),
by=.(positionyear, positionmonth, quadrant, Species, Code)]