Reshaping a data frame with more than one measure variable in R

I'm using a data frame similar to this one:
df <- data.frame(student = c(rep(1, 5), rep(2, 5)), month = c(1:5, 1:5),
                 quiz1p1 = seq(20, 20.9, 0.1), quiz1p2 = seq(30, 30.9, 0.1),
                 quiz2p1 = seq(80, 80.9, 0.1), quiz2p2 = seq(90, 90.9, 0.1))
print(df)
   student month quiz1p1 quiz1p2 quiz2p1 quiz2p2
1        1     1    20.0    30.0    80.0    90.0
2        1     2    20.1    30.1    80.1    90.1
3        1     3    20.2    30.2    80.2    90.2
4        1     4    20.3    30.3    80.3    90.3
5        1     5    20.4    30.4    80.4    90.4
6        2     1    20.5    30.5    80.5    90.5
7        2     2    20.6    30.6    80.6    90.6
8        2     3    20.7    30.7    80.7    90.7
9        2     4    20.8    30.8    80.8    90.8
10       2     5    20.9    30.9    80.9    90.9
It describes grades received by students over five months, on two quizzes that are each divided into two parts.
I need to get the two quizzes into separate rows, so that each student in each month has two rows (one per quiz) and two columns (one per quiz part).
When I melt the table:
dfL <- melt.data.frame(df, id.vars = c("student", "month"))
I get the two parts of each quiz on separate rows too, and
dcast(dfL, student + month ~ variable)
of course gets me right back where I started. I can't find a way to cast the table back into the required form.
Is there a way to make the melt command accept something like:
melt.data.frame(df, measure.var1 = c("quiz1p1", "quiz2p1"),
                    measure.var2 = c("quiz1p2", "quiz2p2"))

Here's how you could do this with reshape(), from base R:
df2 <- reshape(df, direction = "long",
               idvar = 1:2, varying = list(c(3, 5), c(4, 6)),
               v.names = c("p1", "p2"), times = c("quiz1", "quiz2"))
## Checking the output
rbind(head(df2, 3), tail(df2, 3))
#           student month  time   p1   p2
# 1.1.quiz1       1     1 quiz1 20.0 30.0
# 1.2.quiz1       1     2 quiz1 20.1 30.1
# 1.3.quiz1       1     3 quiz1 20.2 30.2
# 2.3.quiz2       2     3 quiz2 80.7 90.7
# 2.4.quiz2       2     4 quiz2 80.8 90.8
# 2.5.quiz2       2     5 quiz2 80.9 90.9
You can also use column names (instead of column numbers) for idvar and varying. It's more verbose, but seems like better practice to me:
## The same operation as above, using just column *names*
df2 <- reshape(df, direction = "long", idvar = c("student", "month"),
               varying = list(c("quiz1p1", "quiz2p1"),
                              c("quiz1p2", "quiz2p2")),
               v.names = c("p1", "p2"), times = c("quiz1", "quiz2"))

I think this does what you want, starting from dfL, the melted data from your question:
# Break "variable" into two columns, one for the quiz and one for the part of the quiz
dfL <- transform(dfL, quiz = substr(variable, 1, 5),
                 part = substr(variable, 6, 7))
# Adjust your dcast call:
dcast(dfL, student + month + quiz ~ part)
#-----
   student month  quiz   p1   p2
1        1     1 quiz1 20.0 30.0
2        1     1 quiz2 80.0 90.0
3        1     2 quiz1 20.1 30.1
...
18       2     4 quiz2 80.8 90.8
19       2     5 quiz1 20.9 30.9
20       2     5 quiz2 80.9 90.9

There was a very similar question asked about half a year ago, in which I wrote the following function:
melt.wide <- function(data, id.vars, new.names) {
  require(reshape2)
  require(stringr)
  data.melt <- melt(data, id.vars = id.vars)
  new.vars <- data.frame(do.call(
    rbind, str_extract_all(data.melt$variable, "[0-9]+")))
  names(new.vars) <- new.names
  cbind(data.melt, new.vars)
}
You can use the function to "melt" your data as follows:
dfL <- melt.wide(df, id.vars = 1:2, new.names = c("Quiz", "Part"))
head(dfL)
#   student month variable value Quiz Part
# 1       1     1  quiz1p1  20.0    1    1
# 2       1     2  quiz1p1  20.1    1    1
# 3       1     3  quiz1p1  20.2    1    1
# 4       1     4  quiz1p1  20.3    1    1
# 5       1     5  quiz1p1  20.4    1    1
# 6       2     1  quiz1p1  20.5    1    1
tail(dfL)
#    student month variable value Quiz Part
# 35       1     5  quiz2p2  90.4    2    2
# 36       2     1  quiz2p2  90.5    2    2
# 37       2     2  quiz2p2  90.6    2    2
# 38       2     3  quiz2p2  90.7    2    2
# 39       2     4  quiz2p2  90.8    2    2
# 40       2     5  quiz2p2  90.9    2    2
Once the data are in this form, you can much more easily use dcast() to get whatever form you desire. For example:
head(dcast(dfL, student + month + Quiz ~ Part))
#   student month Quiz    1    2
# 1       1     1    1 20.0 30.0
# 2       1     1    2 80.0 90.0
# 3       1     2    1 20.1 30.1
# 4       1     2    2 80.1 90.1
# 5       1     3    1 20.2 30.2
# 6       1     3    2 80.2 90.2
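If you would rather see the cast columns labelled p1 and p2 instead of 1 and 2 (a small cosmetic tweak, not part of the answer above), you could relabel Part before casting:
# Prefix the part number so the cast columns read p1/p2
dfL$Part <- paste0("p", dfL$Part)
head(dcast(dfL, student + month + Quiz ~ Part))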

Related

Loop to sum weekly rolling average

I am new to coding. I have a data set of daily stream flow averages over 20 years. Following is an example:
         DATE FLOW
1   10/1/2001 88.2
2   10/2/2001 77.6
3   10/3/2001 68.4
4   10/4/2001 61.5
5   10/5/2001 55.3
6   10/6/2001 52.5
7   10/7/2001 49.7
8   10/8/2001 46.7
9   10/9/2001 43.3
10 10/10/2001 41.3
11 10/11/2001 39.3
12 10/12/2001 37.7
13 10/13/2001 35.8
14 10/14/2001 34.1
15 10/15/2001 39.8
I need to create a loop that sums the previous 6 days together with the current day (a rolling weekly average) and writes the result to an array for the designated water year. I have already created an aggregate step that separates the daily means into their designated water years.
# Separating dates into specific water years
wtr_yr <- function(dates, start_month = 9) {
  # Convert dates into POSIXlt
  POSIDATE <- as.POSIXlt(dates)
  # Year offset
  offset <- ifelse(POSIDATE$mon >= start_month - 1, 1, 0)
  # Water year
  POSIDATE$year + 1900 + offset
}
# Aggregating by water year to take the mean (assuming the date column is data_set$DATE)
mean.FLOW <- aggregate(data_set$FLOW, list(wtr_yr(data_set$DATE)), mean)
This can be done much more easily. But first, let's prepare a bit of example data.
library(tidyverse)
library(lubridate)
df = tibble(
  DATE = seq(mdy("1/1/2010"), mdy("12/31/2022"), 1),
  FLOW = rnorm(length(DATE), 40, 10)
)
output
# A tibble: 4,748 x 2
   DATE        FLOW
   <date>     <dbl>
 1 2010-01-01  34.4
 2 2010-01-02  37.7
 3 2010-01-03  55.6
 4 2010-01-04  40.7
 5 2010-01-05  41.3
 6 2010-01-06  57.2
 7 2010-01-07  44.6
 8 2010-01-08  27.3
 9 2010-01-09  33.1
10 2010-01-10  35.5
# ... with 4,738 more rows
Now let's do the aggregation by year and week number
df %>%
  group_by(year(DATE), week(DATE)) %>%
  summarise(mean = mean(FLOW))
output
# A tibble: 689 x 3
# Groups:   year(DATE) [13]
   `year(DATE)` `week(DATE)`  mean
          <dbl>        <dbl> <dbl>
 1         2010            1  44.5
 2         2010            2  39.6
 3         2010            3  38.5
 4         2010            4  35.3
 5         2010            5  44.1
 6         2010            6  39.4
 7         2010            7  41.3
 8         2010            8  43.9
 9         2010            9  38.5
10         2010           10  42.4
# ... with 679 more rows
Note that for the week function, the first week starts on January 1st. If you want to number the weeks according to the ISO 8601 standard, use the isoweek function; alternatively, epiweek gives the epidemiological week numbering used by the US CDC.
df %>%
  group_by(year(DATE), isoweek(DATE)) %>%
  summarise(mean = mean(FLOW))
output
# A tibble: 681 x 3
# Groups:   year(DATE) [13]
   `year(DATE)` `isoweek(DATE)`  mean
          <dbl>           <dbl> <dbl>
 1         2010               1  40.0
 2         2010               2  45.5
 3         2010               3  33.2
 4         2010               4  38.9
 5         2010               5  45.0
 6         2010               6  40.7
 7         2010               7  38.5
 8         2010               8  42.5
 9         2010               9  37.1
10         2010              10  42.4
# ... with 671 more rows
If you want to better understand how these functions differ, run the code below:
df %>%
  mutate(
    w1 = week(DATE),
    w2 = isoweek(DATE),
    w3 = epiweek(DATE)
  )
output
# A tibble: 4,748 x 5
   DATE        FLOW    w1    w2    w3
   <date>     <dbl> <dbl> <dbl> <dbl>
 1 2010-01-01  34.4     1    53    52
 2 2010-01-02  37.7     1    53    52
 3 2010-01-03  55.6     1    53     1
 4 2010-01-04  40.7     1     1     1
 5 2010-01-05  41.3     1     1     1
 6 2010-01-06  57.2     1     1     1
 7 2010-01-07  44.6     1     1     1
 8 2010-01-08  27.3     2     1     1
 9 2010-01-09  33.1     2     1     1
10 2010-01-10  35.5     2     1     2
# ... with 4,738 more rows
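The original question also asks for a true rolling weekly average, i.e. each day averaged together with the previous six days, rather than a calendar-week aggregate. Here is a minimal sketch of that, assuming the zoo package is installed (dplyr is already loaded via tidyverse above); FLOW_7day is just an illustrative column name:
library(zoo)
# 7-day rolling mean, right-aligned so each value covers the current day
# and the six previous days; the first six values are NA
df %>%
  mutate(FLOW_7day = rollmean(FLOW, k = 7, fill = NA, align = "right"))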

Combine rows based on ranges in a column

I have a pretty large dataset with a column for time in seconds, and I want to combine rows whose times are close together (within about 0.1 to 0.2 seconds of each other), taking the mean.
Here is an example of how the data looks:
BPM seconds
63.9 61.899
63.9 61.902
63.8 61.910
62.1 130.94
62.1 130.95
61.8 211.59
63.8 280.5
60.3 290.4
So I would want to combine the first 3 rows, then the 2 rows after that, and the rest would stand alone, meaning I would want the data to look like this:
BPM seconds
63.9 61.904
62.1 130.95
61.8 211.59
63.8 280.5
60.3 290.4
We need to create groups; that's the important bit, the rest is standard aggregation. A new group starts whenever the gap to the previous row is at least 0.2 seconds (note that !x < 0.2 parses as !(x < 0.2)):
cumsum(!c(0, diff(df1$seconds)) < 0.2)
# [1] 0 0 0 1 1 2 3 4
Then aggregate with aggregate():
aggregate(df1[, 2], list(cumsum(!c(0, diff(df1$seconds)) < 0.2)), mean)
#   Group.1         x
# 1       0  61.90367
# 2       1 130.94500
# 3       2 211.59000
# 4       3 280.50000
# 5       4 290.40000
Or use dplyr:
library(dplyr)
df1 %>%
  group_by(myGroup = cumsum(!c(0, diff(seconds)) < 0.2)) %>%
  summarise(BPM = first(BPM),
            seconds = mean(seconds))
# # A tibble: 5 x 3
#   myGroup   BPM seconds
#     <int> <dbl>   <dbl>
# 1       0  63.9    61.9
# 2       1  62.1    131.
# 3       2  61.8    212.
# 4       3  63.8    280.
# 5       4  60.3    290.
Reproducible example data:
df1 <- read.table(text = "BPM seconds
63.9 61.899
63.9 61.902
63.8 61.910
62.1 130.94
62.1 130.95
61.8 211.59
63.8 280.5
60.3 290.4", header = TRUE)
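If BPM should also be averaged within each group rather than taken from the group's first row, a small variation of the dplyr call above (just a sketch) is:
df1 %>%
  group_by(myGroup = cumsum(!c(0, diff(seconds)) < 0.2)) %>%
  summarise(BPM = mean(BPM),        # average BPM instead of taking the first value
            seconds = mean(seconds))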

Use plyr to make a 2d summary table [duplicate]

This question already has answers here:
How to reshape data from long to wide format
(14 answers)
Closed 7 years ago.
I have a dataset with columns ID, Score and Average age. Using ddply(), I get the following table:
ddply(data, .(id, score), summarize, group_mean = round(mean(avg_age), 1))
    id score group_mean
1  101     0       61.8
2  101     5       70.3
3  101    10       62.2
4    2     0       41.0
5    2     5       40.4
6    2    10       44.5
7   23     0       52.0
8   23     5       52.6
9   25     0       74.5
10  25     5       55.2
11  25    10       48.0
12  28     0       53.4
13  28     5       49.5
14   3     0       41.3
15   3     5       47.8
16   3    10       46.6
17   4     0       53.3
18   4     5       54.2
19   4    10       55.3
20   X     0       72.0
21   X     5       57.1
22   X    10       53.4
What should I do if I want the table to look like a pivot table, with id as rows and score as columns? Namely:
       0    5   10
101 61.8 70.3 62.2
2   41.0 40.4 44.5
...
We can use spread from tidyr. If out is the result of the ddply/summarize call above:
library(tidyr)
spread(out, score, group_mean)
Or acast from reshape2
library(reshape2)
acast(out, id~score, value.var='group_mean')
Or using base R
xtabs(group_mean~id+score, out)
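As a side note, spread has since been superseded in tidyr; with tidyr already loaded, a hedged sketch of the equivalent call using pivot_wider (tidyr >= 1.0) is:
# One column per score value, filled with group_mean
pivot_wider(out, names_from = score, values_from = group_mean)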

Time-series data visualization

I have a pretty large data frame in R stored in long form. It contains body temperature data collected from 40 different individuals at 10-second intervals over 16 days. Individuals have been exposed to two conditions (Cond1 and Cond2). It essentially looks like this:
ID Cond1 Cond2 Day ToD Temp
1 A B 1 18.0 37.1
1 A B 1 18.3 37.2
1 A B 2 18.6 37.5
2 B A 1 18.0 37.0
2 B A 1 18.3 36.9
2 B A 2 18.6 36.9
3 A A 1 18.0 36.8
3 A A 1 18.3 36.7
3 A A 2 18.6 36.7
...
I want to create four separate line plots, one for each combination of conditions (AB, BA, AA, BB), showing mean Temp over time (days 1-16).
P.S. ToD stands for time of day; I'm not sure whether I need to provide it in order to create the plot.
So far I have tried to define the dataset as time series by doing
ts <- ts(data=dataset$Temp, start=1, end=16, frequency=8640)
plot(ts)
This returns a plot of Temp, but I can't figure out how to define condition values for breaking up the data.
Edit:
Essentially I want a plot that looks like the linked example, but one for each group separately, using mean Temp values. That plot is just for one individual in one condition, and I want one that shows the mean for all individuals in the same condition.
You can use summarise and group_by to group the data by condition and then plot it. Is this what you're looking for?
library(dplyr)
## I created a dataframe df that looks like this:
  ID Cond1 Cond2 Day  ToD Temp
1  1     A     B   1 18.0 37.1
2  1     A     B   1 18.3 37.2
3  1     A     B   2 18.6 37.5
4  2     B     A   1 18.0 37.0
5  2     B     A   1 18.3 36.9
6  2     B     A   2 18.6 36.9
7  3     A     A   1 18.0 36.8
8  3     A     A   1 18.3 36.7
9  3     A     A   2 18.6 36.7
df$Cond <- paste0(df$Cond1, df$Cond2)
d <- summarise(group_by(df, Cond, Day), t = mean(Temp))
library(ggplot2)
ggplot(d, aes(Day, t, color = Cond)) + geom_line()
which results in a single line plot of mean Temp by Day, with one coloured line per condition combination.
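Since the question asks for four separate plots, one per condition combination, rather than a single coloured plot, a small sketch building on the d computed above is to facet by Cond:
# One panel per condition combination, each showing mean Temp by Day
ggplot(d, aes(Day, t)) +
  geom_line() +
  facet_wrap(~ Cond)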

How to compute the mean for the last few rows in each time period in a data frame?

I have data collected for a few subjects, every 15 seconds over an hour, split up into periods. Here's what the data frame looks like: the time is "Temps", subjects are "Sujet", and the periods are determined by "Palier".
'data.frame': 2853 obs. of 22 variables:
$ Temps : Factor w/ 217 levels "00:15","00:30",..: 1 2 3 4 5 6 7 8 9 10 ...
$ Sujet : int 1 1 1 1 1 1 1 1 1 1 ...
$ Test : Factor w/ 3 levels "VO2max","Tlim",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Palier : int 1 1 1 1 1 1 1 1 1 1 ...
$ RPE : int 8 8 8 8 8 8 8 8 8 8 ...
$ Rmec : num 39.1 27.5 23.3 21.5 20.3 21.7 20.5 20.7 20.2 20.1 ...
Here is a glimpse of the data.frame:
Temps Sujet Test Palier RPE Rmec Pmec Pchim Fr Vt VE FeO2 FeCO2 VO2 VCO2 RER HR VO2rel VE.VO2 VE.VCO2
1 00:15 1 VO2max 1 8 39.1 185 473.6 19 1854 34.60 16.24 4.48 1353 1268 0.94 121 17.6 0.02557280 0.02728707
2 00:30 1 VO2max 1 8 27.5 185 672.4 17 2602 44.30 15.77 4.78 1921 1731 0.90 124 25.0 0.02306091 0.02559214
3 00:45 1 VO2max 1 8 23.3 185 794.5 18 2793 50.83 15.63 4.85 2270 2015 0.89 131 29.6 0.02239207 0.02522581
4 01:00 1 VO2max 1 8 21.5 185 860.3 20 2756 55.76 15.68 4.88 2458 2224 0.90 137 32.0 0.02268511 0.02507194
5 01:15 1 VO2max 1 8 20.3 185 909.3 23 2709 61.26 15.84 4.88 2598 2446 0.94 139 33.8 0.02357968 0.02504497
6 01:30 1 VO2max 1 8 21.7 185 853.7 21 2899 59.85 16.00 4.89 2439 2395 0.98 140 31.8 0.02453875 0.02498956
Each "Palier" lasts about 5 min and there are from 5 to 10 "Palier". For each subject and "Palier", I need to compute the mean for the last 2 min for all the variables. I haven't figured it out yet with dcast() or ddply(), but I am a newbie!
Any help would be much appreciated!
If you turned it into a data.table (which you'd have to install), you could do this with:
library(data.table)
dt <- as.data.table(d)  # assuming your existing data frame was called d
last.two.min <- dt[, mean(tail(Rmec, 9)), by = .(Sujet, Palier)]
This assumes that your original data frame was called d and that you want the last 9 items (since the data are every 15 seconds; you might want the last 8 if you want the span from 58:15 to 60:00).
I assumed that Rmec was the variable you wanted to get the mean for. If there are multiple for which you want to get the mean, you can do something like:
last.two.min <- dt[, list(mean.Rmec = mean(tail(Rmec, 9)),
                          mean.RPE = mean(tail(RPE, 9))), by = .(Sujet, Palier)]
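Since the question asks for the mean of all the variables over the last two minutes, here is a hedged sketch using .SD and .SDcols; the choice of 8 observations (2 minutes at 15-second intervals) and the column selection are assumptions:
library(data.table)
dt <- as.data.table(d)
# Columns to average: every numeric column except the grouping variables
num_cols <- setdiff(names(dt)[sapply(dt, is.numeric)], c("Sujet", "Palier"))
# Mean of the last 8 observations (2 minutes at 15 s) per subject and Palier
last.two.min <- dt[, lapply(.SD, function(x) mean(tail(x, 8))),
                   by = .(Sujet, Palier), .SDcols = num_cols]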
