Related
I guess something similar has probably been asked before, but I could only find answers for Python and SQL. So please point me to it in the comments if this has already been asked for R!
Data
Let's say we have a dataframe like this:
set.seed(1); df <- data.frame( position = 1:20,value = sample(seq(1,100), 20))
# In case you do not get the same dataframe, see the comment by @Ian Campbell - thanks!
position value
1 1 27
2 2 37
3 3 57
4 4 89
5 5 20
6 6 86
7 7 97
8 8 62
9 9 58
10 10 6
11 11 19
12 12 16
13 13 61
14 14 34
15 15 67
16 16 43
17 17 88
18 18 83
19 19 32
20 20 63
Goal
I'm interested in calculating the average value over n positions and subtracting it from the average value of the next n positions; let's say n = 5 for now.
What I tried
I currently use the method below, but when I apply it to a bigger dataframe it takes a huge amount of time, so I wonder whether there is a faster way.
library(dplyr)

calc <- function(pos) {
  this.five <- df %>% slice(pos:(pos + 4))
  next.five <- df %>% slice((pos + 5):(pos + 9))
  differ <- mean(this.five$value) - mean(next.five$value)
  data.frame(dif = differ)
}

df %>%
  group_by(position) %>%
  do(calc(.$position))
That produces the following table:
position dif
<int> <dbl>
1 1 -15.8
2 2 9.40
3 3 37.6
4 4 38.8
5 5 37.4
6 6 22.4
7 7 4.20
8 8 -26.4
9 9 -31
10 10 -35.4
11 11 -22.4
12 12 -22.3
13 13 -0.733
14 14 15.5
15 15 -0.400
16 16 NaN
17 17 NaN
18 18 NaN
19 19 NaN
20 20 NaN
I suspect a data.table approach may be faster.
library(data.table)
setDT(df)
# left-aligned rolling means over windows of 5 rows, applied to each column of .SD
df[, c("roll.position","rollmean") := lapply(.SD, frollmean, n = 5, fill = NA, align = "left")]
# result = this window's mean minus the mean of the window starting 5 rows ahead
df[, result := rollmean[.I] - rollmean[.I + 5]]
df[, .(position, value, rollmean, result)]
# position value rollmean result
# 1: 1 27 46.0 -15.8
# 2: 2 37 57.8 9.4
# 3: 3 57 69.8 37.6
# 4: 4 89 70.8 38.8
# 5: 5 20 64.6 37.4
# 6: 6 86 61.8 22.4
# 7: 7 97 48.4 4.2
# 8: 8 62 32.2 -26.4
# 9: 9 58 32.0 -31.0
#10: 10 6 27.2 -35.4
#11: 11 19 39.4 -22.4
#12: 12 16 44.2 NA
#13: 13 61 58.6 NA
#14: 14 34 63.0 NA
#15: 15 67 62.6 NA
#16: 16 43 61.8 NA
#17: 17 88 NA NA
#18: 18 83 NA NA
#19: 19 32 NA NA
#20: 20 63 NA NA
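For comparison, the same result can also be obtained without data.table via a vectorized dplyr pipeline (a sketch, assuming the zoo package is available for the rolling mean):
library(dplyr)
library(zoo)
df %>%
  mutate(rollmean = zoo::rollmean(value, k = 5, fill = NA, align = "left"),
         dif = rollmean - lead(rollmean, 5))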
Data
RNGkind(sample.kind = "Rounding")
set.seed(1); df <- data.frame( position = 1:20,value = sample(seq(1,100), 20))
RNGkind(sample.kind = "default")
I have a longitudinal dataset in long form with around 2,800 rows and around 400 participants in total. Here's a sample of my data.
# ID wave score sex age edu
#1 1001 1 28 1 69 12
#2 1001 2 27 1 70 12
#3 1001 3 28 1 71 12
#4 1001 4 26 1 72 12
#5 1002 1 30 2 78 9
#6 1002 3 30 2 80 9
#7 1003 1 30 2 65 16
#8 1003 2 30 2 66 16
#9 1003 3 29 2 67 16
#10 1003 4 28 2 68 16
#11 1004 1 22 2 85 4
#12 1005 1 20 2 60 9
#13 1005 2 18 1 61 9
#14 1006 1 22 1 74 9
#15 1006 2 23 1 75 9
#16 1006 3 25 1 76 9
#17 1006 4 19 1 77 9
I want to create a new column "cutoff" with the values "Normal" or "Impaired", because my outcome variable "score" has a cutoff score indicating impairment according to a norm. The norm consists of different -1.5SD measures (the cutoff points) according to Sex, Edu (years of education), and Age.
Below is what I'm currently doing: checking an Excel file myself and hard-coding the corresponding cutoff score for each combination of the three conditions. First of all, I am not sure I am creating the column correctly.
data$cutoff <- ifelse(data$sex==1 & data$age<70
& data$edu<3
& data$score<19.91, "Impaired", "Normal")
data$cutoff <- ifelse(data$sex==2 & data$age<70
& data$edu<3
& data$score<18.39, "Impaired", "Normal")
Additionally, I am wondering if I can import an Excel file stating the norm and create the column according to the values in it.
The Excel file is structured as shown below.
# Sex Male Female
#60-69 Edu(yr) 0-3 4-6 7-12 13>= 0-3 4-6 7-12 13>=
#Age Number 22 51 119 72 130 138 106 51
# Mean 24.45 26.6 27.06 27.83 23.31 25.86 27.26 28.09
# SD 3.03 1.89 1.8 1.53 3.28 2.55 1.85 1.44
# -1.5SD' 19.92 23.27 23.76 24.8 18.53 21.81 23.91 25.15
#70-79 Edu(yr) 0-3 4-6 7-12 13>= 0-3 4-6 7-12 13>=
....
I have created new columns "agecat" and "educat", allocating each ID to the age and education groups used in the norm. Now I want to make use of these columns, matching them against the rows and columns of the Excel file above. One of my motivations is to create code that can be reused in further research based on this test's scores.
I think your ifelse statements should work fine, but I would definitely import the Excel file rather than hardcoding it, though you may need to structure it a bit differently. I would structure it just like a dataset, with columns for Sex, Edu, Age, Mean, SD, -1.5SD, etc., import it into R, then do a left outer join on Sex+Edu+Age:
merge(x = long_df, y = norm_df, by = c("Sex", "Edu(yr)", "Age"), all.x = TRUE)
Then you can compare the columns directly.
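For instance, a minimal sketch (the file name, the m1.5SD column name, and the tidy one-row-per-group layout of the sheet are assumptions):
library(readxl)
# hypothetical tidy norm file: one row per Sex + Edu(yr) + Age group,
# with the precomputed cutoff in a column named m1.5SD
norm_df <- read_excel("norms.xlsx")
joined <- merge(x = long_df, y = norm_df,
                by = c("Sex", "Edu(yr)", "Age"), all.x = TRUE)
joined$cutoff <- ifelse(joined$score < joined$m1.5SD, "Impaired", "Normal")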
If I understand correctly, the OP wants to mark a certain type of outlier in their dataset. So, there are two tasks here:
1. Compute the statistics mean(score), sd(score), and the cutoff value mean(score) - 1.5 * sd(score) for each group of sex, age category agecat, and edu category educat.
2. Find all rows where score is lower than the cutoff value for the particular group.
As already mentioned by hannes101, the second step can be implemented by a non-equi join.
library(data.table)
# categorize age and edu (left closed intervals)
mydata[, c("agecat", "educat") := .(cut(age, c(seq(0, 90, 10), Inf), right = FALSE),
cut(edu, c(0, 4, 7, 13, Inf), right = FALSE))][]
# compute statistics
cutoffs <- mydata[, .(.N, Mean = mean(score), SD = sd(score),
m1.5SD = mean(score) - 1.5 * sd(score)),
by = .(sex, agecat, educat)]
# non-equi update join
mydata[, cutoff := "Normal"]
mydata[cutoffs, on = .(sex, agecat, educat, score < m1.5SD), cutoff := "Impaired"][]
mydata
ID wave score sex age edu agecat educat cutoff
1: 1001 1 28 1 69 12 [60,70) [7,13) Normal
2: 1001 2 27 1 70 12 [70,80) [7,13) Normal
3: 1001 3 28 1 71 12 [70,80) [7,13) Normal
4: 1001 4 26 1 72 12 [70,80) [7,13) Normal
5: 1002 1 30 2 78 9 [70,80) [7,13) Normal
6: 1002 3 30 2 80 9 [80,90) [7,13) Normal
7: 1003 1 33 2 65 16 [60,70) [13,Inf) Normal
8: 1003 2 32 2 66 16 [60,70) [13,Inf) Normal
9: 1003 3 31 2 67 16 [60,70) [13,Inf) Normal
10: 1003 4 24 2 68 16 [60,70) [13,Inf) Impaired
11: 1004 1 22 2 85 4 [80,90) [4,7) Normal
12: 1005 1 20 2 60 9 [60,70) [7,13) Normal
13: 1005 2 18 1 61 9 [60,70) [7,13) Normal
14: 1006 1 22 1 74 9 [70,80) [7,13) Normal
15: 1006 2 23 1 75 9 [70,80) [7,13) Normal
16: 1006 3 25 1 76 9 [70,80) [7,13) Normal
17: 1006 4 19 1 77 9 [70,80) [7,13) Normal
18: 1007 1 33 2 65 16 [60,70) [13,Inf) Normal
19: 1007 2 32 2 66 16 [60,70) [13,Inf) Normal
20: 1007 3 31 2 67 16 [60,70) [13,Inf) Normal
21: 1007 4 31 2 68 16 [60,70) [13,Inf) Normal
ID wave score sex age edu agecat educat cutoff
In this made-up example there is only one row that meets the "Impaired" condition.
Likewise, the statistics are rather sparsely populated (note that groups with N = 1 have SD = NA, so no row in those groups can ever be marked "Impaired"):
cutoffs
sex agecat educat N Mean SD m1.5SD
1: 1 [60,70) [7,13) 2 23.00000 7.071068 12.39340
2: 1 [70,80) [7,13) 7 24.28571 3.147183 19.56494
3: 2 [70,80) [7,13) 1 30.00000 NA NA
4: 2 [80,90) [7,13) 1 30.00000 NA NA
5: 2 [60,70) [13,Inf) 8 30.87500 2.900123 26.52482
6: 2 [80,90) [4,7) 1 22.00000 NA NA
7: 2 [60,70) [7,13) 1 20.00000 NA NA
Data
OP's sample dataset has been modified in one group for demonstration.
library(data.table)
mydata <- fread("
# ID wave score sex age edu
#1 1001 1 28 1 69 12
#2 1001 2 27 1 70 12
#3 1001 3 28 1 71 12
#4 1001 4 26 1 72 12
#5 1002 1 30 2 78 9
#6 1002 3 30 2 80 9
#7 1003 1 33 2 65 16
#8 1003 2 32 2 66 16
#9 1003 3 31 2 67 16
#10 1003 4 24 2 68 16
#11 1004 1 22 2 85 4
#12 1005 1 20 2 60 9
#13 1005 2 18 1 61 9
#14 1006 1 22 1 74 9
#15 1006 2 23 1 75 9
#16 1006 3 25 1 76 9
#17 1006 4 19 1 77 9
#18 1007 1 33 2 65 16
#19 1007 2 32 2 66 16
#20 1007 3 31 2 67 16
#21 1007 4 31 2 68 16
", drop = 1L)
Starting from this SO question.
Example data.frame:
df = read.table(text = 'ID Day Count Count_group
18 1933 6 15
33 1933 6 15
37 1933 6 15
18 1933 6 15
16 1933 6 15
11 1933 6 15
111 1932 5 9
34 1932 5 9
60 1932 5 9
88 1932 5 9
18 1932 5 9
33 1931 3 4
13 1931 3 4
56 1931 3 4
23 1930 1 1
6 1800 6 12
37 1800 6 12
98 1800 6 12
52 1800 6 12
18 1800 6 12
76 1800 6 12
55 1799 4 6
6 1799 4 6
52 1799 4 6
133 1799 4 6
112 1798 2 2
677 1798 2 2
778 888 4 8
111 888 4 8
88 888 4 8
10 888 4 8
37 887 2 4
26 887 2 4
8 886 1 2
56 885 1 1
22 120 2 6
34 120 2 6
88 119 1 6
99 118 2 5
12 118 2 5
90 117 1 3
22 115 2 2
99 115 2 2', header = TRUE)
The Count column shows the total number of ID values for each Day, and the Count_group column shows the sum of Count over Day, Day - 1, Day - 2, Day - 3 and Day - 4.
e.g. Day 1933 has Count_group 15 because Count 6 (1933) + Count 5 (1932) + Count 3 (1931) + Count 1 (1930) + Count 0 (1929) = 15.
What I need to do is duplicate the observations that feed into each Count_group and append them to it, so that each Count_group shows the rows of its Day, Day - 1, Day - 2, Day - 3 and Day - 4.
e.g. Count_group = 15 is composed of the Count values of Days 1933, 1932, 1931 and 1930 (1929 is not present in the df), so all five days need to be included under Count_group = 15. The next one, Count_group = 9, is composed of 1932, 1931, 1930, 1929 and 1928; etc...
Desired output:
ID Day Count Count_group
18 1933 6 15
33 1933 6 15
37 1933 6 15
18 1933 6 15
16 1933 6 15
11 1933 6 15
111 1932 5 15
34 1932 5 15
60 1932 5 15
88 1932 5 15
18 1932 5 15
33 1931 3 15
13 1931 3 15
56 1931 3 15
23 1930 1 15
111 1932 5 9
34 1932 5 9
60 1932 5 9
88 1932 5 9
18 1932 5 9
33 1931 3 9
13 1931 3 9
56 1931 3 9
23 1930 1 9
33 1931 3 4
13 1931 3 4
56 1931 3 4
23 1930 1 4
23 1930 1 1
6 1800 6 12
37 1800 6 12
98 1800 6 12
52 1800 6 12
18 1800 6 12
76 1800 6 12
55 1799 4 12
6 1799 4 12
52 1799 4 12
133 1799 4 12
112 1798 2 12
677 1798 2 12
55 1799 4 6
6 1799 4 6
52 1799 4 6
133 1799 4 6
112 1798 2 6
677 1798 2 6
112 1798 2 2
677 1798 2 2
778 888 4 8
111 888 4 8
88 888 4 8
10 888 4 8
37 887 2 8
26 887 2 8
8 886 1 8
56 885 1 8
37 887 2 4
26 887 2 4
8 886 1 4
56 885 1 4
8 886 1 2
56 885 1 2
56 885 1 1
22 120 2 6
34 120 2 6
88 119 1 6
99 118 2 6
12 118 2 6
90 117 1 6
88 119 1 6
99 118 2 6
12 118 2 6
90 117 1 6
22 115 2 6
99 115 2 6
99 118 2 5
12 118 2 5
90 117 1 5
22 115 2 5
99 115 2 5
90 117 1 3
22 115 2 3
99 115 2 3
22 115 2 2
99 115 2 2
(in the desired output above, each group of 5 days is meant to be separated by a blank line to make the groups clearer)
I have several data.frames that are grouped by different numbers of days, so I would like to adapt the code (by changing it a little) to each of them.
Thanks
A generalised version of my previous answer...
#first add grouping variables
days <- 5 #grouping no of days
df$smalldaygroup <- c(0,cumsum(sapply(2:nrow(df),function(i) df$Day[i]!=df$Day[i-1]))) #individual days
df$bigdaygroup <- c(0,cumsum(sapply(2:nrow(df),function(i) df$Day[i]<df$Day[i-1]-days+1))) #blocks of linked days
#duplicate days in each big group
df2 <- lapply(split(df,df$bigdaygroup),function(x) {
n <- max(x$Day)-min(x$Day)+1 #number of consecutive days in big group
dayvec <- (max(x$Day):min(x$Day)) #possible days in range
daylog <- dayvec[dayvec %in% x$Day] #actual days in range
pattern <- data.frame(base=rep(dayvec,each=days))
pattern$rep <- sapply(1:nrow(pattern),function(i) pattern$base[i]+1-sum(pattern$base[1:i]==pattern$base[i])) #indices to repeat
pattern$offset <- match(pattern$rep,daylog)-match(pattern$base,daylog) #offsets (used later)
pattern <- pattern[(pattern$base %in% x$Day) & (pattern$rep %in% x$Day),] #remove invalid elements
#store pattern in list as offsets needed in next loop
return(list(df=split(x,x$smalldaygroup)[match(pattern$rep,daylog)],pat=pattern))
})
#change the Count_group to previous value in added entries
df2 <- lapply(df2,function(L) lapply(1:length(L$df),function(i) {
x <- L$df[[i]]
offset <- L$pat$offset #pointer to day to copy Count_group from
x$Count_group <- L$df[[i-offset[i]]]$Count_group[1]
return(x)
}))
df2 <- do.call(rbind,unlist(df2,recursive=FALSE)) #bind back together
df2[,5:6] <- NULL #remove grouping variables
head(df2,30) #ignore rownames!
ID Day Count Count_group
01.1 18 1933 6 15
01.2 33 1933 6 15
01.3 37 1933 6 15
01.4 18 1933 6 15
01.5 16 1933 6 15
01.6 11 1933 6 15
02.7 111 1932 5 15
02.8 34 1932 5 15
02.9 60 1932 5 15
02.10 88 1932 5 15
02.11 18 1932 5 15
03.12 33 1931 3 15
03.13 13 1931 3 15
03.14 56 1931 3 15
04 23 1930 1 15
05.7 111 1932 5 9
05.8 34 1932 5 9
05.9 60 1932 5 9
05.10 88 1932 5 9
05.11 18 1932 5 9
06.12 33 1931 3 9
06.13 13 1931 3 9
06.14 56 1931 3 9
07 23 1930 1 9
08.12 33 1931 3 4
08.13 13 1931 3 4
08.14 56 1931 3 4
09 23 1930 1 4
010 23 1930 1 1
11.16 6 1800 6 12
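Since the window length is factored out into the days variable at the top, adapting this to data.frames grouped over a different number of days should only require changing that one assignment, e.g.:
days <- 3 #grouping no of days, for data linked over 3-day windows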
I attach a rather mechanical method, but I believe it is a good starting point.
I have noticed that in your original table the entry
ID Day Count Count_group
18 1933 6 14
is duplicated; I have left it untouched for the sake of clarity.
Structure of the approach:
1. Read original data
2. Generate list of data frames, for each Day
3. Generate final data frame, collapsing the list in 2.
1. Read original data
We start with
df = read.table(text = 'ID Day Count Count_group
18 1933 6 14
33 1933 6 14
37 1933 6 14
18 1933 6 14
16 1933 6 14
11 1933 6 14
111 1932 5 9
34 1932 5 9
60 1932 5 9
88 1932 5 9
18 1932 5 9
33 1931 3 4
13 1931 3 4
56 1931 3 4
23 1930 1 1
6 1800 6 12
37 1800 6 12
98 1800 6 12
52 1800 6 12
18 1800 6 12
76 1800 6 12
55 1799 4 6
6 1799 4 6
52 1799 4 6
133 1799 4 6
112 1798 2 2
677 1798 2 2
778 888 4 7
111 888 4 7
88 888 4 7
10 888 4 7
37 887 2 4
26 887 2 4
8 886 1 2
56 885 1 1', header = TRUE)
# ordered vector of unique values for "Day"
ord_day <- unique(df$Day[order(df$Day)])
ord_day
[1] 885 886 887 888 1798 1799 1800 1930 1931 1932 1933
2. Generate list of data frames, for each Day
For each element in ord_day we introduce a data.frame as an element of a list called df_new_aug.
Such data frames are defined through a for loop over all values in ord_day except ord_day[2] and ord_day[1], which are treated separately.
Idea behind the looping: for each unique ord_day[i] with i > 2 we check which of the days ord_day[i-1] and ord_day[i-2] (possibly both) contribute (through the variable "Count") to the value "Count_group" at ord_day[i].
We therefore introduce if-else statements in the loop.
Here we go
# Recursive generation of the list of data.frames (for days > 886)
#-----------------------------------------------------------------
df_new <- list()
df_new_aug <- list()
# we exclude cases i = 1, 2: they are treated manually below
for (i in 3:length(ord_day)) {
  # is "Count_group" at ord_day[i] equal to the sum of "Count" at ord_day[i], ord_day[i-1] and ord_day[i-2]?
  if (unique(df[df$Day == ord_day[i], "Count_group"]) == unique(df[df$Day == ord_day[i], "Count"]) +
      unique(df[df$Day == ord_day[i-1], "Count"]) + unique(df[df$Day == ord_day[i-2], "Count"])
  ) {
    # we create columns ID | Day | Count
    df_new[[i]] <- data.frame(df[df$Day == ord_day[i] | df$Day == ord_day[i-1] | df$Day == ord_day[i-2],
                                 c("ID", "Day", "Count")])
    # we append the Count_group of the Day in ord_day[i]
    df_new_aug[[i]] <- data.frame(df_new[[i]],
                                  Count_group = rep(unique(df[df$Day == ord_day[i], "Count_group"]), nrow(df_new[[i]])))
  } else if (unique(df[df$Day == ord_day[i], "Count_group"]) == unique(df[df$Day == ord_day[i], "Count"]) +
             unique(df[df$Day == ord_day[i-1], "Count"])) { # only "Count" at i and i-1 contribute to "Count_group" at i
    df_new[[i]] <- data.frame(df[df$Day == ord_day[i] | df$Day == ord_day[i-1],
                                 c("ID", "Day", "Count")])
    # we append the Count_group of the Day in ord_day[i]
    df_new_aug[[i]] <- data.frame(df_new[[i]],
                                  Count_group = rep(unique(df[df$Day == ord_day[i], "Count_group"]), nrow(df_new[[i]])))
  } else { # only "Count" at i contributes to "Count_group" at i
    df_new[[i]] <- data.frame(df[df$Day == ord_day[i],
                                 c("ID", "Day", "Count")])
    # we append the Count_group of the Day in ord_day[i]
    df_new_aug[[i]] <- data.frame(df_new[[i]],
                                  Count_group = rep(unique(df[df$Day == ord_day[i], "Count_group"]), nrow(df_new[[i]])))
  }
} # closing the for loop
# for ord_day[2] = "886" (both "Count" at i =2 and i = 1 contribute to "Count_group" at i=2)
#-------------------------------------------------------------------------------------
df_new[[2]] <- data.frame(df[df$Day == ord_day[2] | df$Day == ord_day[1],
c("ID", "Day", "Count")])
# we append the Count_Group of the Day in ord_day[2]
df_new_aug[[2]] <- data.frame(df_new[[2]],
Count_group = rep(unique(df[df$Day == ord_day[2], "Count_group"]), nrow(df_new[[2]]) ) )
# for ord_day[1] = "885" (only "count" at i = 1 contributes to "Count_group" at i =1)
#------------------------------------------------------------------------------------
df_new[[1]] <- data.frame(df[df$Day == ord_day[1], c("ID", "Day", "Count")])
# we append the Count_Group of the Day in ord_day[i]
df_new_aug[[1]] <- data.frame(df_new[[1]], Count_group = rep(unique(df[df$Day == ord_day[1], "Count_group"]), nrow(df_new[[1]]) ) )
# produced list
df_new_aug
3. Generate final data frame, collapsing the list in 2.
We collapse df_new_aug through an ugly loop, but other solutions (for example with Reduce() and merge()) are possible; a one-line alternative is sketched after the loop:
# merging the list (mechanically): final result
df_result <- df_new_aug[[1]]
for (i in 2:length(df_new_aug)) {
  df_result <- rbind(df_result, df_new_aug[[i]])
}
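For instance, this one-liner (a sketch) should produce the same df_result:
# row-bind all list elements at once
df_result <- do.call(rbind, df_new_aug)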
One arrives at df_result and the analysis is complete.
I am trying to move from long-format data to wide format in order to do some correlation analyses.
But dcast seems to create two rows for the first subject, splitting the data across those two rows and filling the resulting empty cells with NA.
The first 2 subjects were being duplicated when I was using alphanumeric subject codes; switching to numeric subject numbers got it down to only the first subject being duplicated.
The first few lines of the long format data frame:
Subject Age Gender R_PTA L_PTA BE_PTA Avg_PTA L_Aided_SII R_Aided_SII Best_Aided_SII L_Unaided_SII R_Unaided_SII Best_Unaided_SII L_SII_Diff R_SII_Diff
1 1 74 M 48.33 53.33 48.33 50.83 31 42 42 14 25 25 17 17
2 2 77 F 36.67 36.67 36.67 36.67 73 67 73 44 43 44 29 24
3 3 72 F 45.00 41.67 41.67 43.33 42 34 42 35 28 35 7 6
4 4 66 F 36.67 36.67 36.67 36.67 66 76 76 44 44 44 22 32
5 5 38 F 41.67 46.67 41.67 44.17 48 58 58 23 29 29 25 29
6 6 65 M 35.00 43.33 35.00 39.17 46 60 60 32 46 46 14 14
Best_SII_Diff rSII MoCA_Vis MoCA_Nam MoCA_Attn MoCA_Lang MoCA_Abst MoCA_Del_Rec MoCA_Ori MoCA_Tot PNT Semantic Aided PNT_Prop PNT_Prop_Mod
1 17 -0.4231157 5 3 6 2 2 2 6 26 0.971 0.029 Unaided 0.971 0.983
2 29 1.2739255 3 3 5 0 2 2 5 20 0.954 0.046 Unaided 0.960 0.966
3 7 -1.2777889 4 2 5 2 2 5 6 26 0.966 0.034 Unaided 0.960 0.982
4 32 1.5959701 5 3 6 3 2 5 6 30 0.983 0.017 Unaided 0.983 0.994
5 29 0.9492167 4 2 6 3 1 3 6 25 0.983 0.017 Unaided 0.983 0.994
6 14 -0.2936395 4 2 6 2 2 2 6 24 0.989 0.011 Unaided 0.989 0.994
PNT_S_Wt PNT_P_Wt
1 0.046 0.041
2 0.073 0.033
3 0.045 0.074
4 0.049 0.057
5 0.049 0.057
6 0.049 0.057
Creating varlist:
varlist <- list(colnames(subset(PNT_Data_All2, ,c(18:27,29:33))))
My dcast command:
Data_Wide <- dcast(as.data.table(PNT_Data_All2),Subject + Age + Gender + R_PTA + L_PTA + BE_PTA + Avg_PTA + L_Aided_SII + R_Aided_SII + Best_Aided_SII + L_Unaided_SII + R_Unaided_SII + Best_Unaided_SII + L_SII_Diff + R_SII_Diff + Best_SII_Diff + rSII ~ Aided, value.var=varlist)
The resulting first few lines of the wide format:
Subject Age Gender R_PTA L_PTA BE_PTA Avg_PTA L_Aided_SII R_Aided_SII Best_Aided_SII L_Unaided_SII R_Unaided_SII Best_Unaided_SII L_SII_Diff R_SII_Diff
1: 1 74 M 48.33 53.33 48.33 50.83 31 42 42 14 25 25 17 17
2: 1 74 M 48.33 53.33 48.33 50.83 31 42 42 14 25 25 17 17
3: 2 77 F 36.67 36.67 36.67 36.67 73 67 73 44 43 44 29 24
4: 3 72 F 45.00 41.67 41.67 43.33 42 34 42 35 28 35 7 6
5: 4 66 F 36.67 36.67 36.67 36.67 66 76 76 44 44 44 22 32
6: 5 38 F 41.67 46.67 41.67 44.17 48 58 58 23 29 29 25 29
Notice Subject 1 has 2 entries; all of the other subjects seem correct.
Is this a problem with my command/arguments? A bug in dcast?
Edit 1: Through the process of elimination, the extra entries only appear when I include the "rSII" variable. This is a variable that is calculated from a previous step in the script:
library(MASS)  # for stdres()
PNT_Data_All$rSII <- stdres(lm(Best_Aided_SII ~ Best_Unaided_SII, data = PNT_Data_All))
PNT_Data_All <- PNT_Data_All[, colnames(PNT_Data_All)[c(1:17, 34, 18:33)]]
Is there something about that calculated variable that would mess up dcast for some subjects?
Edit 2 to add my workaround:
I ended up rounding the calculated variable to 3 digits after the decimal, and that solved the problem. Everything is casting correctly now with no duplicates.
# note: format() returns a character vector, so rSII becomes a character column here
PNT_Data_All$rSII <- format(round(stdres(lm(Best_Aided_SII ~ Best_Unaided_SII, data = PNT_Data_All)), 3), nsmall = 3)
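If the cause was floating-point noise in rSII, it can be confirmed before rounding by counting the distinct rSII values per subject (a sketch using data.table's uniqueN; any subject with more than one distinct value gets split by dcast, because every variable on the left-hand side of the formula is part of the row key):
library(data.table)
as.data.table(PNT_Data_All)[, uniqueN(rSII), by = Subject][V1 > 1]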
I would like to plot things like (where C means column):
C4 vs C2 for all rows sharing the same C1, and
C1 vs C4 for all rows sharing the same C2.
The data frame in question is:
C1 C2 C3 C4
1 2012-12-28 0 NA 10773
2 2012-12-28 5 NA 34112
3 2012-12-28 10 NA 30901
4 2012-12-28 0 NA 12421
5 2012-12-30 0 NA 3925
6 2012-12-30 5 NA 17436
7 2012-12-30 10 NA 13717
8 2012-12-30 15 NA 36708
9 2012-12-30 20 NA 28408
10 2012-12-30 NA NA 2880
11 2013-01-02 0 -13.89 9972
12 2013-01-02 5 -13.89 10576
13 2013-01-02 10 -13.89 33280
14 2013-01-02 15 -13.89 28667
15 2013-01-02 20 -13.89 21104
16 2013-01-02 25 -13.89 24771
17 2013-01-02 NA NA 22
18 2013-01-05 0 -3.80 20727
19 2013-01-05 5 -3.80 2033
20 2013-01-05 10 -3.80 16045
21 2013-01-05 15 -3.80 12074
22 2013-01-05 20 -3.80 10095
23 2013-01-05 NA NA 32693
24 2013-01-08 0 -1.70 19579
25 2013-01-08 5 -1.70 20200
26 2013-01-08 10 -1.70 12263
27 2013-01-08 15 -1.70 28797
28 2013-01-08 20 -1.70 23963
29 2013-01-11 0 -2.30 26525
30 2013-01-11 5 -2.30 21472
31 2013-01-11 10 -2.30 9633
32 2013-01-11 15 -2.30 27849
33 2013-01-11 20 -2.30 23950
34 2013-01-17 0 1.40 16271
35 2013-01-17 5 1.40 18581
36 2013-01-19 0 0.10 5910
37 2013-01-19 5 0.10 16890
38 2013-01-19 10 0.10 13078
39 2013-01-19 NA NA 55
40 2013-01-23 0 -9.20 15048
41 2013-01-23 6 -9.20 20792
42 2013-01-26 0 NA 21649
43 2013-01-26 6 NA 24655
44 2013-01-29 0 0.10 9100
45 2013-01-29 5 0.10 27514
46 2013-01-29 10 0.10 19392
47 2013-01-29 15 0.10 21720
48 2013-01-29 NA 0.10 112
49 2013-02-11 0 0.40 13619
50 2013-02-11 5 0.40 2748
51 2013-02-11 10 0.40 1290
52 2013-02-11 15 0.40 762
53 2013-02-11 20 0.40 1125
54 2013-02-11 25 0.40 1709
55 2013-02-11 30 0.40 29459
56 2013-02-11 35 0.40 106474
57 2013-02-13 0 1.30 3355
58 2013-02-13 5 1.30 970
59 2013-02-13 10 1.30 2240
60 2013-02-13 15 1.30 35871
61 2013-02-18 0 -0.60 8564
62 2013-02-20 0 -1.20 12399
63 2013-02-26 0 0.30 2985
64 2013-02-26 5 0.30 9891
65 2013-03-01 0 0.90 5221
66 2013-03-01 5 0.90 9736
67 2013-03-05 0 0.60 3192
68 2013-03-05 5 0.60 4243
69 2013-03-09 0 0.10 45138
70 2013-03-09 5 0.10 55534
71 2013-03-12 0 1.40 7278
72 2013-03-12 NA NA 45
73 2013-03-15 0 0.30 2447
74 2013-03-15 5 0.30 2690
75 2013-03-18 0 -2.30 3008
76 2013-03-22 0 -0.90 11411
77 2013-03-22 5 -0.90 NA
78 2013-03-22 10 -0.90 17675
79 2013-03-22 NA NA 47
80 2013-03-25 0 1.20 9802
81 2013-03-25 5 1.20 15790
There are other posts here about time-series subsetting and merging/matching/pasting subsets, but I think I miss the point when trying to follow those instructions.
The end goal is to have a plot of C1 vs C4 for every C2 = 0, C2 = 5, and so on, and likewise C4 vs C2 for every same C1. I know there are some duplicate C1 and C2 values, but the C4 values for those can be averaged. I can figure the plots out; I just need to know how to subset the data in this way. Perhaps creating a new data.frame() with these subsets would be easiest?
Thanks in advance,
It's relatively easy to plot subsets using ggplot2. First you need to reshape your data from "wide" to "long" format, creating a new categorical variable with the possible values C3 and C4.
library(reshape2)
library(ggplot2)
# Starting with the data you posted in a data frame called "dat":
# Convert C1 to date format
dat$C1 = as.Date(dat$C1)
# Reshape data to long format
dat.m = melt(dat, id.var=c("C1","C2"))
# Plot values of C3 and C4 vs. C1 with separate lines for each level of C2
ggplot(dat.m, aes(x=C1, y=value, group=C2, colour=as.factor(C2))) +
  geom_line() + geom_point() +
  facet_grid(variable ~ ., scales="free_y")
The C3 lines are the same for every level of C2, so they all overlap each other.
You can also have a separate panel for each level of C2:
ggplot(dat.m, aes(x=C1, y=value, group=variable, colour=variable)) +
  geom_line() + geom_point() +
  facet_grid(variable ~ C2, scales="free_y") +
  theme(axis.text.x=element_text(angle=-90)) +
  guides(colour=FALSE)
Here's a base graphics method for getting separate plots, using your column names:
# Use lapply to create a separate plot for each level of C2
lapply(na.omit(unique(dat$C2)), function(x) {
  # The next line of code removes NA values so that there will be a line through
  # every point. You can remove this line if you don't care whether all points
  # are connected or not.
  dat = dat[complete.cases(dat[, c("C1", "C2", "C4")]), ]
  # Create a plot of C4 vs. C1 for the current value of C2
  plot(dat$C1[dat$C2 == x], dat$C4[dat$C2 == x],
       type = "o", pch = 16,
       main = paste0("C2 = ", x), xlab = "C1", ylab = "C4")
})
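If the duplicated C1/C2 combinations should be averaged first (as mentioned in the question), a sketch with base aggregate():
# average C4 over duplicate (C1, C2) pairs before plotting
dat_avg <- aggregate(C4 ~ C1 + C2, data = dat, FUN = mean)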