Missing columns when using max in data.table [duplicate]

This question already has an answer here:
Subset rows corresponding to max value by group using data.table
(1 answer)
Closed 7 years ago.
I am trying to get the top frequency words in a data.table
data.table : dtable4G
key freq value
================================
thanks for the 612 support
thanks for the 380 drink
thanks for the 215 payment
thanks for the 27 encouragement
have a great 154 day
have a great 132 weekend
have a great 54 week
have a great 42 time
have a great 19 night
at the same 346 time
at the same 57 damn
at the same 30 pace
at the same 11 speed
at the same 7 level
at the same 1 rate
I tried the code
dtable4G[ , max(freq), by = key]
and
dtable4G[ , .I[which.max(freq)] , by = key]
With both of the above commands, I get the same result:
key V1
====================
thanks for the 612
have a great 154
at the same 346
I want the result to be:
key freq value
================================
thanks for the 612 support
have a great 154 day
at the same 346 time
Any ideas what I am doing wrong?
EDITED
dtable4G[dtable4G[, .I[which.max(freq)], by = key]$V1]
worked for me, though it took some time to run through my 5.4 million rows.
But this was way faster than using
dtable4G[,.SD[which.max(freq)],by=key]
Reference: With data.table, is SD[which.max(Var1)] the fastest way to find the max of a group?
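For anyone who wants to check the timing difference on their own table, a rough benchmark sketch (this assumes the microbenchmark package is installed; it is not part of the original post):
library(data.table)
library(microbenchmark)
# Compare the .SD approach against the .I row-index approach on dtable4G
microbenchmark(
  SD_way = dtable4G[, .SD[which.max(freq)], by = key],
  I_way  = dtable4G[dtable4G[, .I[which.max(freq)], by = key]$V1],
  times  = 10
)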

We can subset the data.table to keep only the row with the maximum freq for each key value with the following:
dtable4G[,.SD[which.max(freq)],by=key]
For better performance, you can use the approach below as well. It doesn't construct .SD and is thus faster:
dtable4G[dtable4G[, .I[which.max(freq)], by = key]$V1]
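To see why the nested form brings back all three columns, here is a minimal sketch on a toy table built from a few of the rows above: the inner call returns one row number per key, and $V1 extracts those row numbers so they can index the full table.
library(data.table)
dt <- data.table(key   = c("thanks for the", "thanks for the", "have a great", "have a great"),
                 freq  = c(612, 380, 154, 132),
                 value = c("support", "drink", "day", "weekend"))
dt[, .I[which.max(freq)], by = key]            # key and V1: the row number of the max freq per key
dt[dt[, .I[which.max(freq)], by = key]$V1]     # those rows, with all columns kept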

Related

Rolling subset of data frame within for loop in R

The big-picture explanation is that I am trying to do a sliding window analysis on environmental data in R. I have PAR (photosynthetically active radiation) data for a select number of sequential dates (pre-determined based on other biological factors) for two years (2014 and 2015), with one value of PAR per day. See below the first few lines of the data frame (the data frame name is "rollingpar").
par14 par15
1356.3242 1306.7725
NaN 1232.5637
1349.3519 505.4832
NaN 1350.4282
1344.9306 1344.6508
NaN 1277.9051
989.5620 NaN
I would like to create a loop (or any other way possible) to subset the data frame (both columns!) into two-week windows (14 rows) from start to finish, sliding from one window to the next by a week (7 rows). So the first window would include rows 1 to 14, the second window would include rows 8 to 21, and so forth.
After subsetting, the data needs to be reshaped (currently using the melt function in the reshape2 package) so that the values of the PAR data are in one column and the variable of par14 or par15 is in the other column. Then I need to get rid of the NaN data and finally perform a Wilcoxon rank sum test on each window, comparing PAR by the year variable (par14 or par15).
Below is the code I wrote to prove the concept of what I wanted, and for the first subsetted window it gives me exactly what I want.
library(reshape2)
par.sub=rollingpar[1:14, ]
par.sub=melt(par.sub)
par.sub=na.omit(par.sub)
par.sub$variable=as.factor(par.sub$variable)
wilcox.test(value~variable, par.sub)
#when melt flips a data frame the columns become value and variable...
#for this case value holds the PAR data and variable holds the year
#information
When I tried to write a for loop to iterate the process through the whole data frame (total rows = 139), I got errors every which way I ran it. Additionally, this loop doesn't even take the slide-by-a-week aspect into account. I figured that if I could just work out how to get the windows and run the analysis via a loop first, then I could try to parse through the sliding part.
Basically, I realize that what I explained I wanted and what I wrote this for loop to do are slightly different: the code below slides row by row, i.e. on a one-day basis. I would greatly appreciate it if the solution encompassed the slide-by-a-week aspect. I am fairly new to R and do not have extensive experience with for loops, so I feel like there is probably an easy fix to make this work.
wilcoxvalues=data.frame(p.values=numeric(0))
Upar=rollingpar$par14
for (i in 1:length(Upar)){
par.sub=rollingpar[[i]:[i]+13, ]
par.sub=melt(par.sub)
par.sub=na.omit(par.sub)
par.sub$variable=as.factor(par.sub$variable)
save.sub=wilcox.test(value~variable, par.sub)
for (j in 1:length(save.sub)){
wilcoxvalues$p.value[j]=save.sub$p.value
}
}
If anyone has a much better way to do this through a different package or function that I am unaware of, I would love to be enlightened. I did try rollapply but ran into problems finding a way to apply it to an entire data frame and not just one column. I have searched for assistance among the many other questions regarding subsetting, for loops, and rolling analysis, but can't quite seem to find exactly what I need. Any help would be appreciated by a frustrated grad student :) and if I did not provide enough information please let me know.
Consider an lapply over a sequence of window start positions spaced 7 days apart through 365 days of the year (the last day is not included, to avoid a single day in the last grouping), returning a list of data frames of Wilcoxon test p-values with a week indicator. Then row-bind each list item into a final, single data frame:
library(reshape2)
slidingWindow <- seq(1,364,by=7)
slidingWindow
# [1] 1 8 15 22 29 36 43 50 57 64 71 78 85 92 99 106 113 120 127
# [20] 134 141 148 155 162 169 176 183 190 197 204 211 218 225 232 239 246 253 260
# [39] 267 274 281 288 295 302 309 316 323 330 337 344 351 358
# LIST OF WILCOX P VALUES DFs FOR EACH SLIDING WINDOW (TWO-WEEK PERIODS)
wilcoxvalues <- lapply(slidingWindow, function(i) {
  par.sub=rollingpar[i:(i+13), ]
  par.sub=melt(par.sub)
  par.sub=na.omit(par.sub)
  par.sub$variable=as.factor(par.sub$variable)
  data.frame(week=paste0("Week: ", i%/%7+1, "-", i%/%7+2),
             p.values=wilcox.test(value~variable, par.sub)$p.value)
})
# SINGLE DF OF ALL P-VALUES
wilcoxdf <- do.call(rbind, wilcoxvalues)
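One caveat: the sequence above walks through a full 365-day year, while the question mentions only 139 rows, so the later windows index past the end of the data. A hedged adjustment is to derive the window starts from the data itself, assuming rollingpar is the data frame from the question:
# Start a new two-week window every 7 rows, stopping so the last window still has 14 rows
slidingWindow <- seq(1, nrow(rollingpar) - 13, by = 7)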

Sum row values based on previous ones

I'll try to be specific: I want to create a new column on a data frame in which each value is the running sum of the values so far in another column.
So I already have the first two columns (ID and Value) below and want to create the third one (Sum), but I don't know how to do this.
In the column "Sum", the values are the running sum of the values in "Value", so for example, 31.098 (Sum) is the sum of 16.91 and 14.18 (Value):
ID Value Sum
157 16.91531834 16.91531834
142 14.18365203 31.09897037
205 11.93528052 43.03425089
89 11.83021643 54.86446732
53 6.3668838 61.23135112
204 3.99243539 65.22378651
202 3.21496113 68.43874764
17 1.93317924 70.37192688
220 1.74406388 72.11599076
147 1.59697415 73.71296491
33 1.42887161 75.14183652
138 1.28178189 76.42361841
154 1.19773062 77.62134903
It is the first time I'm posting here. Until now I found everything I was searching for already answered... so, sorry if this kind of question is already answered too (it must have been!), but I wasn't able to find it. I'm not a native speaker (as you probably guessed already), so maybe I didn't use the proper keywords...
Thanks!!
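No answer is included in this excerpt, but a running total like the Sum column is exactly what cumsum() computes. A minimal sketch, assuming the data frame is named df and has the ID and Value columns shown above:
# Base R: cumulative sum of Value, row by row
df$Sum <- cumsum(df$Value)

# Or the dplyr equivalent
library(dplyr)
df <- df %>% mutate(Sum = cumsum(Value))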

selecting consecutive answers in R

I have data set as follows (it is just a sample below):
dataframe<-data.frame("id" = c(1,2,5,7,9,21,22,23),"questionfk"=c(145,51,51,145,145,51,145,51))
In this data, id represents the order of the questions and questionfk is the question id.
I would like to filter this data to questionfk values 145 and 51, keeping only the cases where 145 is asked immediately before 51 (i.e., 51 is the very next question after 145). So what I want in the end looks like below:
dataframefiltered<-data.frame("id" = c(1,2,22,23),"questionfk"=c(145,51,145,51))
I did this with lots of ifs and fors. Is it possible to do this with data.table, and how?
Thank you!
Maybe this helps:
library(data.table)
setDT(dataframe)[dataframe[, {indx=which(c(TRUE, questionfk[-1]==145 &
                                             questionfk[-.N]==51) & c(TRUE, diff(id)==1))
                              sort(c(indx, indx+1))}]]
# id questionfk
#1: 1 145
#2: 2 51
#3: 22 145
#4: 23 51
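If it helps to see what the inner j expression is doing, the sketch below (purely for inspection, on the sample data from the question) prints the row positions it selects before the outer subset is applied:
setDT(dataframe)[, {indx=which(c(TRUE, questionfk[-1]==145 &
                                   questionfk[-.N]==51) & c(TRUE, diff(id)==1))
                    sort(c(indx, indx+1))}]
# [1] 1 2 7 8   (row positions, i.e. ids 1, 2, 22, 23)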
I'm not sure I understand the exact conditions you're looking for, but I'm basing this on wanting to select questions 145 and 51, but only when they come consecutively in that order. I realize that this does not give the same result as you show, but presumably you can modify this to match the right conditions.
Rather than data.table, here's a way to do it with dplyr (which is also fast with big datasets, and very elegant):
library(dplyr)

dataframe %>%
  mutate(last_question = lag(questionfk),
         next_question = lead(questionfk),
         after_145 = last_question == 145,
         before_51 = next_question == 51) %>%
  filter(after_145 | before_51) %>%
  select(id, questionfk)
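Since the filter above is looser than the desired output, here is a hedged variant of the same dplyr idea that does reproduce the result in the question: keep a 145 only when the very next row is a 51 with a consecutive id, and keep a 51 only when the row immediately before it is such a 145.
library(dplyr)
dataframe %>%
  filter((questionfk == 145 & lead(questionfk) == 51  & lead(id) == id + 1) |
         (questionfk == 51  & lag(questionfk)  == 145 & lag(id)  == id - 1))
# keeps the rows with ids 1, 2, 22, 23 -- matching dataframefiltered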

Combining data in R based on a characteristic of that data [duplicate]

This question already has answers here:
Aggregate data in R
(3 answers)
Closed 8 years ago.
Suppose I have data for a number of transactions that occur within different states:
State Cost
AK, 70
AK, 75
AK, 10
IL, 20
IL, 1050
IL, 235
etc...
How can I compress my data so that I'm only looking at total cost per state? I can only come up with solutions by writing Python scripts to compress this data, but it seems like R should be able to support this operation.
State Cost
AK, 155
IL, 1305
etc...
Any ideas are greatly appreciated.
library("dplyr")
options(digits=4)
StatsByState <- group_by(Your.df, State)
summarise(StatsByState, Sum = sum(Cost), Mean = mean(Cost), StDev = sd(Cost))
options(digits=7)
State Sum Mean StDev
1 AK 155 51.67 36.17
2 IL 1040 346.67 565.80
3 NE 720 240.00 242.49
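For completeness, the same per-state sum can be had without dplyr; a short sketch, assuming the data frame is called Your.df as in the answer above:
# Base R
aggregate(Cost ~ State, data = Your.df, FUN = sum)

# data.table
library(data.table)
setDT(Your.df)[, .(Sum = sum(Cost)), by = State]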

How to optimize for loops in extremely large dataframe

I have a dataframe "x" with 5.9 million rows and 4 columns: idnumber/integer, compdate/integer, and judge/character, representing individual cases completed in an administrative court. The data was imported from a Stata dataset and the date field came in as integer, which is fine for my purposes. I want to create the caseload variable by calculating the number of cases completed by the judge within the 30-day window of the completion date of the case at issue.
Here are the first 34 rows of data:
idnumber compdate judge
1 9615 JVC
2 15316 BAN
3 15887 WLA
4 11968 WFN
5 15001 CLR
6 13914 IEB
7 14760 HSD
8 11063 RJD
9 10948 PPL
10 16502 BAN
11 15391 WCP
12 14587 LRD
13 10672 RTG
14 11864 JCW
15 15071 GMR
16 15082 PAM
17 11697 DLK
18 10660 ADP
19 13284 ECC
20 13052 JWR
21 15987 MAK
22 10105 HEA
23 14298 CLR
24 18154 MMT
25 10392 HEA
26 10157 ERH
27 9188 RBR
28 12173 JCW
29 10234 PAR
30 10437 ADP
31 11347 RDW
32 14032 JTZ
33 11876 AMC
34 11470 AMC
Here's what I came up with. For each record I'm taking a subset of the data for that particular judge, then subsetting to the cases decided in the 30-day window, and then assigning the length of a vector in the subsetted dataframe to the caseload variable for the subject case, as follows:
for(i in 1:length(x$idnumber)){
  e<-x$compdate[i]
  f<-e-29
  a<-x[x$judge==x$judge[i] & !is.na(x$compdate),]
  b<-a[a$compdate<=e & a$compdate>=f,]
  x$caseload[i]<-length(b$idnumber)
}
It is working, but it is taking extremely long to complete. How can I optimize this or do it more easily? Sorry, I'm very new to R and to programming -- I'm a law professor trying to analyze court data... Your help is appreciated. Thanks.
Ken
You don't have to loop through every row. You can do operations on the entire column at once. First, create some data:
# Create some data.
n<-6e6 # cases
judges<-apply(combn(LETTERS,3),2,paste0,collapse='') # About 2600 judges
set.seed(1)
x<-data.frame(idnumber=1:n,judge=sample(judges,n,replace=TRUE),compdate=Sys.Date()+round(runif(n,1,120)))
Now, you can make a rolling window function, and run it on each judge.
# Sort
x<-x[order(x$judge,x$compdate),]
# Create a little rolling window function.
rolling.window<-function(y,window=30) seq_along(y) - findInterval(y-window,y)
# Run the little function on each judge.
x$workload<-unlist(by(x$compdate,x$judge,rolling.window))
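To make the window function concrete, here is a tiny worked example with made-up completion days for a single judge; for each case it returns how many of that judge's cases fall in the trailing 30-day window, including the case itself:
y <- c(1, 5, 20, 40, 45, 80)            # sorted completion days for one judge
seq_along(y) - findInterval(y - 30, y)  # same computation as rolling.window(y)
# [1] 1 2 3 2 3 1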
I don't have much experience with rolling calculations, but...
Calculate this per-day, not per-case (since it will be the same for cases on the same day).
Calculate a cumulative sum of the number of cases, and then take the difference of the current value of this sum and the value of the sum 31 days ago (or min{daysAgo:daysAgo>30} since cases are not resolved every day).
It's probably fastest to use a data.table. This is my attempt, using @nograpes' simulated data. Comments start with #.
require(data.table)
DT <- data.table(x)
DT[,compdate:=as.integer(compdate)]
setkey(DT,judge,compdate)
# count cases for each day
ldt <- DT[,.N,by='judge,compdate']
# cumulative sum of counts
ldt[,nrun:=cumsum(N),by=judge]
# see how far to look back
ldt[,lookbk:=sapply(1:.N,function(i){
  z <- compdate[i]-compdate[i:1]
  older <- which(z>30)
  if (length(older)) min(older)-1L else as(NA,'integer')
}),by=judge]
# compute cumsum(today) - cumsum(more than 30 days ago)
ldt[,wload:=list(sapply(1:.N,function(i)
  nrun[i]-ifelse(is.na(lookbk[i]),0,nrun[i-lookbk[i]])
))]
On my laptop, this takes under a minute. Run this command to see the output for one judge:
print(ldt['XYZ'],nrow=120)
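The ldt table is per judge and day; to get the per-case caseload column the question asked for, the daily values can be joined back onto DT. A sketch, assuming a data.table version that supports update joins with on=:
# carry each day's workload back to every case completed on that day
DT[ldt, caseload := i.wload, on = c("judge", "compdate")]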
