Why does subsetting the dataframe not work in this case? [duplicate] - r

This question already has answers here: Select rows from a data frame based on values in a vector.
I want to subset a dataframe to the .id values specified, but it gives me this warning:
Warning in .id == c(3, 5:12, 14, 20:64, 66:72, 75, 78:79, 81:111, 113:136, :
longer object length is not a multiple of shorter object length
when using this code:
newdatarev = subset(newdata, .id == c(3,5:12,14,20:64,66:72,75,78:79,81:111,113:136,138:149,151:160,
162:183,185:225,227:233,235:247,249,251:264,266:328,330:364,366:383,
385:411,413:471,473:490,492:580,582:598,600:603,605:606,608:619,621:646,
648:686,688:718,720:746,748,750:753,755:762,764:861,863:875,877:894,
897:911,913:914,916:926,928:941))
For reference, here is a small bit of newdata:
> newdata
.id V1 V2
1: 1 -2.870109 8273.632
2: 1 4.829891 8273.632
3: 1 21.329891 8279.132
4: 1 25.729891 8281.332
5: 1 32.329891 8285.732
---
17937: 941 1834.113417 1411.605
17938: 941 1818.713417 1392.905
17939: 941 1814.313417 1386.305
17940: 941 1814.313417 1364.305
17941: 941 1828.613417 1224.605
I have a feeling it has to do with how .id is structured, and that my code interferes with how the rows are matched against the .id values. It does produce a result, but it is a very strange collection of data:
> newdatarev
.id V1 V2
1: 55 158.8030 2045.753
2: 100 227.7387 8250.454
3: 153 356.8675 1383.835
4: 205 483.6464 3946.844
5: 299 635.8744 8387.862
6: 347 722.9303 5147.715
7: 393 850.1742 2115.559
8: 439 857.9288 8243.071
9: 482 926.5706 1608.928
10: 532 1107.8380 2616.635
11: 632 1234.6482 4957.055
12: 633 1201.8700 3252.570
13: 683 1315.2215 2068.050
14: 684 1325.5905 6253.692
15: 734 1414.3443 2267.337
16: 784 1551.0153 5184.641
17: 831 1634.2056 7159.362
18: 880 1724.5570 5726.908
19: 933 1879.6398 3465.536
Thank you in advance!

The == operator compares two vectors element by element, recycling the shorter vector along the longer one; that is both why you get the warning and why the subset looks random. What you actually want is to test whether each .id value matches any element of the vector, which is exactly what the %in% infix operator does:
newdatarev <- subset(newdata, .id %in% c(3,5:12,14,20:64,66:72,75,78:79,81:111,113:136,138:149,151:160,
162:183,185:225,227:233,235:247,249,251:264,266:328,330:364,366:383,
385:411,413:471,473:490,492:580,582:598,600:603,605:606,608:619,621:646,
648:686,688:718,720:746,748,750:753,755:762,764:861,863:875,877:894,
897:911,913:914,916:926,928:941))
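To see the difference on a toy vector (values chosen purely for illustration): == recycles the shorter vector and compares position by position, while %in% tests membership for every element:
x <- 1:5
x == c(2, 4)   # recycled to c(2,4,2,4,2): FALSE FALSE FALSE TRUE FALSE, plus the same warning
x %in% c(2, 4) # membership per element:   FALSE TRUE FALSE TRUE FALSE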

Related

Adding a second column as a function of first with data.table's (`:=`) [duplicate]

I want to create a new data.table, or maybe just add some columns to an existing data.table. It is easy to specify multiple new columns, but what happens if I want a third column that calculates a value based on one of the columns I am creating? I think the plyr package can do something like that. Can we perform such iterative (sequential) column creation in data.table?
I want to do something like the following:
dt <- data.table(shop = 1:10, income = 10:19*70)
dt[ , list(hope = income * 1.05, hopemore = income * 1.20, hopemorerealistic = hopemore - 100)]
or maybe
dt[ , `:=`(hope = income*1.05, hopemore = income*1.20, hopemorerealistic = hopemore-100)]
You can also use <- within the call to list, e.g.:
DT <- data.table(a=1:5)
DT[, c('b','d') := list(b1 <- a*2, b1*3)]
DT
a b d
1: 1 2 6
2: 2 4 12
3: 3 6 18
4: 4 8 24
5: 5 10 30
Or
DT[, `:=`(hope = hope <- a+1, z = hope-1)]
DT
a b d hope z
1: 1 2 6 2 1
2: 2 4 12 3 2
3: 3 6 18 4 3
4: 4 8 24 5 4
5: 5 10 30 6 5
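Another option (a minimal sketch on the question's own data) is to chain two := calls; the second j expression can see the columns created by the first:
dt <- data.table(shop = 1:10, income = 10:19*70)
dt[, c('hope','hopemore') := list(income*1.05, income*1.20)][, hopemorerealistic := hopemore - 100]
head(dt, 2)
#    shop income  hope hopemore hopemorerealistic
# 1:    1    700 735.0      840               740
# 2:    2    770 808.5      924               824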
It is also possible by using curly braces and semicolons in j.
There are multiple ways to go about it; here are two examples:
# If you simply want to output:
dt[ ,
{hope=income*1.05;
hopemore=income*1.20;
list(hope=hope, hopemore=hopemore, hopemorerealistic=hopemore-100)}
]
# if you want to save the values
dt[ , c("hope", "hopemore", "hopemorerealistic") :=
{hope=income*1.05;
hopemore=income*1.20;
list(hope, hopemore, hopemore-100)}
]
dt
# shop income hope hopemore hopemorerealistic
# 1: 1 700 735.0 840 740
# 2: 2 770 808.5 924 824
# 3: 3 840 882.0 1008 908
# 4: 4 910 955.5 1092 992
# 5: 5 980 1029.0 1176 1076
# 6: 6 1050 1102.5 1260 1160
# 7: 7 1120 1176.0 1344 1244
# 8: 8 1190 1249.5 1428 1328
# 9: 9 1260 1323.0 1512 1412
# 10: 10 1330 1396.5 1596 1496

Issue with structure of data.frame for sunburstR plot

Hello,
I created the dataframe below, based on the example in the sunburstR documentation.
Column Count
1: ACTIVE 68764
2: INACTIVE 73599
3: ACTIVE-RESIDENT 68279
4: ACTIVE-NONRESIDENT 485
5: INACTIVE-RESIDENT 63378
6: INACTIVE-NONRESIDENT 10221
7: ACTIVE-RESIDENT-LATIN 55
8: ACTIVE-RESIDENT-CYRLIC 68224
9: ACTIVE-NONRESIDENT-LATIN 465
10: ACTIVE-NONRESIDENT-CYRLIC 20
11: INACTIVE-RESIDENT-LATIN 114
12: INACTIVE-RESIDENT-CYRLIC 63264
13: INACTIVE-NONRESIDENT-LATIN 7915
14: INACTIVE-NONRESIDENT-CYRLIC 2306
The first column is character, the second is integer.
However when I try to plot it, I get nothing.
sunburst(sunburst_data)
Any hints about what's wrong with the structure of my dataframe?
Include only the leaf nodes in your data frame...
df <- read.table(text = '
Column Count
ACTIVE-RESIDENT-LATIN 55
ACTIVE-RESIDENT-CYRLIC 68224
ACTIVE-NONRESIDENT-LATIN 465
ACTIVE-NONRESIDENT-CYRLIC 20
INACTIVE-RESIDENT-LATIN 114
INACTIVE-RESIDENT-CYRLIC 63264
INACTIVE-NONRESIDENT-LATIN 7915
INACTIVE-NONRESIDENT-CYRLIC 2306
', header = TRUE)
library(sunburstR)
sunburst(df)
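sunburst() builds the hierarchy by splitting the first column on '-' (the default separator, as in the package's sequences example), so the parent totals are implied by the leaves. A quick sanity check against the original table:
## leaf counts roll up to the parent totals, e.g. ACTIVE = 55 + 68224 + 465 + 20 = 68764
sum(df$Count[grepl('^ACTIVE-', df$Column)]) ## 68764, matching the ACTIVE row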

Aggregate column intervals into new columns in data.table

I would like to aggregate a data.table based on intervals of a column (time). The idea here is that each interval should be a separate column with a different name in the output.
I've seen a similar question on SO, but I couldn't get my head around the problem. Help?
reproducible example
library(data.table)
# sample data
set.seed(1L)
dt <- data.table( id= sample(LETTERS,50,replace=TRUE),
time= sample(60,50,replace=TRUE),
points= sample(1000,50,replace=TRUE))
# simple summary by `id`
dt[, .(total = sum(points)), by=id]
#    id total
# 1:  J  2058
# 2:  T  1427
# 3:  C  1020
In the desired output, each subtotal column is named after the interval it originates from. For example, with three intervals, say time < 10, time < 20, and time < 30, the head of the output should be:
id | total | subtotal_under10 | subtotal_under20 | subtotal_under30
Exclusive Subtotal Categories
set.seed(1L);
N <- 50L;
dt <- data.table(id=sample(LETTERS,N,T),time=sample(60L,N,T),points=sample(1000L,N,T));
breaks <- seq(0L,as.integer(ceiling((max(dt$time)+1L)/10)*10),10L);
cuts <- cut(dt$time,breaks,labels=paste0('subtotal_under',breaks[-1L]),right=F);
res <- dcast(dt[,.(subtotal=sum(points)),.(id,cut=cuts)],id~cut,value.var='subtotal');
res <- res[dt[,.(total=sum(points)),id]][order(id)];
res;
## id subtotal_under10 subtotal_under20 subtotal_under30 subtotal_under40 subtotal_under50 subtotal_under60 total
## 1: A NA NA 176 NA NA 512 688
## 2: B NA NA 599 NA NA NA 599
## 3: C 527 NA NA NA NA NA 527
## 4: D NA NA 174 NA NA NA 174
## 5: E NA 732 643 NA NA NA 1375
## 6: F 634 NA NA NA NA 1473 2107
## 7: G NA NA 1410 NA NA NA 1410
## 8: I NA NA NA NA NA 596 596
## 9: J 447 NA 640 NA NA 354 1441
## 10: K 508 NA NA NA NA 454 962
## 11: M NA 14 1358 NA NA NA 1372
## 12: N NA NA NA NA 730 NA 730
## 13: O NA NA 271 NA NA 259 530
## 14: P NA NA NA NA 78 NA 78
## 15: Q 602 NA 485 NA 925 NA 2012
## 16: R NA 599 357 479 NA NA 1435
## 17: S NA 986 716 865 NA NA 2567
## 18: T NA NA NA NA 105 NA 105
## 19: U NA NA NA 239 1163 641 2043
## 20: V NA 683 NA NA 929 NA 1612
## 21: W NA NA NA NA 229 NA 229
## 22: X 214 993 NA NA NA NA 1207
## 23: Y NA 130 992 NA NA NA 1122
## 24: Z NA NA NA NA 104 NA 104
## id subtotal_under10 subtotal_under20 subtotal_under30 subtotal_under40 subtotal_under50 subtotal_under60 total
Cumulative Subtotal Categories
I've come up with a new solution based on the requirement of cumulative subtotals.
My objective was to avoid looping operations such as lapply(), since I realized that it should be possible to compute the desired result using only vectorized operations such as findInterval(), vectorized/cumulative operations such as cumsum(), and vector indexing.
I succeeded, but I should warn you that the algorithm is fairly intricate, in terms of its logic. I'll try to explain it below.
breaks <- seq(0L,as.integer(ceiling((max(dt$time)+1L)/10)*10),10L);
ints <- findInterval(dt$time,breaks);
res <- dt[,{ y <- ints[.I]; o <- order(y); y <- y[o]; w <- which(c(y[-length(y)]!=y[-1L],T)); v <- rep(c(NA,w),diff(c(1L,y[w],length(breaks)))); c(sum(points),as.list(cumsum(points[o])[v])); },id][order(id)];
setnames(res,2:ncol(res),c('total',paste0('subtotal_under',breaks[-1L])));
res;
## id total subtotal_under10 subtotal_under20 subtotal_under30 subtotal_under40 subtotal_under50 subtotal_under60
## 1: A 688 NA NA 176 176 176 688
## 2: B 599 NA NA 599 599 599 599
## 3: C 527 527 527 527 527 527 527
## 4: D 174 NA NA 174 174 174 174
## 5: E 1375 NA 732 1375 1375 1375 1375
## 6: F 2107 634 634 634 634 634 2107
## 7: G 1410 NA NA 1410 1410 1410 1410
## 8: I 596 NA NA NA NA NA 596
## 9: J 1441 447 447 1087 1087 1087 1441
## 10: K 962 508 508 508 508 508 962
## 11: M 1372 NA 14 1372 1372 1372 1372
## 12: N 730 NA NA NA NA 730 730
## 13: O 530 NA NA 271 271 271 530
## 14: P 78 NA NA NA NA 78 78
## 15: Q 2012 602 602 1087 1087 2012 2012
## 16: R 1435 NA 599 956 1435 1435 1435
## 17: S 2567 NA 986 1702 2567 2567 2567
## 18: T 105 NA NA NA NA 105 105
## 19: U 2043 NA NA NA 239 1402 2043
## 20: V 1612 NA 683 683 683 1612 1612
## 21: W 229 NA NA NA NA 229 229
## 22: X 1207 214 1207 1207 1207 1207 1207
## 23: Y 1122 NA 130 1122 1122 1122 1122
## 24: Z 104 NA NA NA NA 104 104
## id total subtotal_under10 subtotal_under20 subtotal_under30 subtotal_under40 subtotal_under50 subtotal_under60
Explanation
breaks <- seq(0L,as.integer(ceiling((max(dt$time)+1L)/10)*10),10L);
breaks <- seq(0,ceiling(max(dt$time)/10)*10,10); ## old derivation, for reference
First, we derive breaks as before. I should mention that I realized there was a subtle bug in my original derivation algorithm. Namely, if the maximum time value is a multiple of 10, then the derived breaks vector would've been short by 1. Consider if we had a maximum time value of 60. The original calculation of the upper limit of the sequence would've been ceiling(60/10)*10, which is just 60 again. But it should be 70, since the value 60 technically belongs in the 60 <= time < 70 interval. I fixed this in the new code (and retroactively amended the old code) by adding 1 to the maximum time value when computing the upper limit of the sequence. I also changed two of the literals to integers and added an as.integer() coercion to preserve integerness.
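A quick check of the off-by-one fix, using the 60 example from above:
ceiling(60/10)*10;     ## 60: the old upper limit, one break short
ceiling((60+1)/10)*10; ## 70: the fixed upper limit, so 60 lands in 60 <= time < 70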
ints <- findInterval(dt$time,breaks);
Second, we precompute the interval indexes into which each time value falls. We can precompute this once for the entire table, because we'll be able to index out each id group's subset within the j argument of the subsequent data.table indexing operation. Note that findInterval() behaves perfectly for our purposes using the default arguments; we don't need to mess with rightmost.closed, all.inside, or left.open. This is because findInterval() by default uses lower <= value < upper logic, and it's impossible for values to fall below the lowest break (which is zero) or on or above the highest break (which must be greater than the maximum time value because of the way we derived it).
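A toy demonstration of that default behavior (the values are assumed purely for illustration):
findInterval(c(0, 9, 10, 59), seq(0L, 70L, 10L));
## [1] 1 1 2 6
## e.g. 10 falls in the second interval, 10 <= time < 20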
res <- dt[,{ y <- ints[.I]; o <- order(y); y <- y[o]; w <- which(c(y[-length(y)]!=y[-1L],T)); v <- rep(c(NA,w),diff(c(1L,y[w],length(breaks)))); c(sum(points),as.list(cumsum(points[o])[v])); },id][order(id)];
Third, we compute the aggregation using a data.table indexing operation, grouping by id. (Afterward we sort by id using a chained indexing operation, but that's not significant.) The j argument consists of 6 statements executed in a braced block which I will now explain one at a time.
y <- ints[.I];
This pulls out the interval indexes for the current id group in input order.
o <- order(y);
This captures the order of the group's records by interval. We will need this order for the cumulative summation of points, as well as the derivation of which indexes in that cumulative sum represent the desired interval subtotals. Note that the within-interval orders (i.e. ties) are irrelevant, since we're only going to extract the final subtotals of each interval, which will be the same regardless if and how order() breaks ties.
y <- y[o];
This actually reorders y to interval order.
w <- which(c(y[-length(y)]!=y[-1L],T));
This computes the endpoints of each interval sequence, IOW the indexes of only those elements that comprise the final element of an interval. This vector will always contain at least one index, it will never contain more indexes than there are intervals, and it will be unique.
v <- rep(c(NA,w),diff(c(1L,y[w],length(breaks))));
This repeats each element of w according to its distance (as measured in intervals) from its following element. We use diff() on y[w] to compute these distances, requiring an appended length(breaks) element to properly treat the final element of w. We also need to cover if the first interval (and zero or more subsequent intervals) is not represented in the group, in which case we must pad it with NAs. This requires prepending an NA to w and prepending a 1 to the argument vector to diff().
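A worked trace may help. Suppose a group's sorted interval indexes are y = c(2, 2, 5) and there are 7 intervals (length(breaks) == 8); these values are assumed purely for illustration:
y <- c(2L, 2L, 5L);
w <- which(c(y[-length(y)] != y[-1L], T)); ## 2 3: the last position of each interval run
v <- rep(c(NA, w), diff(c(1L, y[w], 8L))); ## NA 2 2 2 3 3 3
## interval 1 gets NA (unrepresented); intervals 2-4 all read the cumulative sum at
## position 2; intervals 5-7 read it at position 3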
c(sum(points),as.list(cumsum(points[o])[v]));
Finally, we can compute the group aggregation result. Since you want a total column and then separate subtotal columns, we need a list starting with the total aggregation, followed by one list component per subtotal value. points[o] gives us the target summation operand in interval order, which we then cumulatively sum, and then index with v to produce the correct sequence of cumulative subtotals. We must coerce the vector to a list using as.list(), and then prepend the list with the total aggregation, which is simply the sum of the entire points vector. The resulting list is then returned from the j expression.
setnames(res,2:ncol(res),c('total',paste0('subtotal_under',breaks[-1L])));
Last, we set the column names. It is more performant to set them once after-the-fact, as opposed to having them set repeatedly in the j expression.
Benchmarking
For benchmarking, I wrapped my code in a function, and did the same for Mike's code. I decided to make my breaks variable a parameter with its derivation as the default argument, and I did the same for Mike's my_nums variable, but without a default argument.
Also note that for the identical() proofs-of-equivalence, I coerce the two results to matrix, because Mike's code always computes the total and subtotal columns as doubles, whereas my code preserves the type of the input points column (i.e. integer if it was integer, double if it was double). Coercing to matrix was the easiest way I could think of to verify that the actual data is equivalent.
library(data.table);
library(microbenchmark);
bgoldst <- function(dt,breaks=seq(0L,as.integer(ceiling((max(dt$time)+1L)/10)*10),10L)) { ints <- findInterval(dt$time,breaks); res <- dt[,{ y <- ints[.I]; o <- order(y); y <- y[o]; w <- which(c(y[-length(y)]!=y[-1L],T)); v <- rep(c(NA,w),diff(c(1L,y[w],length(breaks)))); c(sum(points),as.list(cumsum(points[o])[v])); },id][order(id)]; setnames(res,2:ncol(res),c('total',paste0('subtotal_under',breaks[-1L]))); res; };
mike <- function(dt,my_nums) { cols <- sapply(1:length(my_nums),function(x){return(paste0("subtotal_under",my_nums[x]))}); dt[,(cols) := lapply(my_nums,function(x) ifelse(time<x,points,NA))]; dt[,total := points]; dt[,lapply(.SD,function(x){ if (all(is.na(x))){ as.numeric(NA) } else{ as.numeric(sum(x,na.rm=TRUE)) } }),by=id, .SDcols=c("total",cols) ][order(id)]; };
## OP's sample input
set.seed(1L);
N <- 50L;
dt <- data.table(id=sample(LETTERS,N,T),time=sample(60L,N,T),points=sample(1000L,N,T));
identical(as.matrix(bgoldst(copy(dt))),as.matrix(mike(copy(dt),c(10,20,30,40,50,60))));
## [1] TRUE
microbenchmark(bgoldst(copy(dt)),mike(copy(dt),c(10,20,30,40,50,60)));
## Unit: milliseconds
## expr min lq mean median uq max neval
## bgoldst(copy(dt)) 3.281380 3.484301 3.793532 3.588221 3.780023 6.322846 100
## mike(copy(dt), c(10, 20, 30, 40, 50, 60)) 3.243746 3.442819 3.731326 3.526425 3.702832 5.618502 100
Mike's code is actually faster (usually) by a small amount for the OP's sample input.
## large input 1
set.seed(1L);
N <- 1e5L;
dt <- data.table(id=sample(LETTERS,N,T),time=sample(60L,N,T),points=sample(1000L,N,T));
identical(as.matrix(bgoldst(copy(dt))),as.matrix(mike(copy(dt),c(10,20,30,40,50,60,70))));
## [1] TRUE
microbenchmark(bgoldst(copy(dt)),mike(copy(dt),c(10,20,30,40,50,60,70)));
## Unit: milliseconds
## expr min lq mean median uq max neval
## bgoldst(copy(dt)) 19.44409 19.96711 22.26597 20.36012 21.26289 62.37914 100
## mike(copy(dt), c(10, 20, 30, 40, 50, 60, 70)) 94.35002 96.50347 101.06882 97.71544 100.07052 146.65323 100
For this much larger input, my code significantly outperforms Mike's.
In case you're wondering why I had to add the 70 to Mike's my_nums argument, it's because with so many more records, the probability of getting a 60 in the random generation of dt$time is extremely high, which requires the additional interval. You can see that the identical() call gives TRUE, so this is correct.
## large input 2
set.seed(1L);
N <- 1e6L;
dt <- data.table(id=sample(LETTERS,N,T),time=sample(60L,N,T),points=sample(1000L,N,T));
identical(as.matrix(bgoldst(copy(dt))),as.matrix(mike(copy(dt),c(10,20,30,40,50,60,70))));
## [1] TRUE
microbenchmark(bgoldst(copy(dt)),mike(copy(dt),c(10,20,30,40,50,60,70)));
## Unit: milliseconds
## expr min lq mean median uq max neval
## bgoldst(copy(dt)) 204.8841 207.2305 225.0254 210.6545 249.5497 312.0077 100
## mike(copy(dt), c(10, 20, 30, 40, 50, 60, 70)) 1039.4480 1086.3435 1125.8285 1116.2700 1158.4772 1412.6840 100
For this even larger input, the performance difference is slightly more pronounced.
I'm pretty sure something like this might work as well:
# sample data
set.seed(1)
dt <- data.table( id= sample(LETTERS,50,replace=TRUE),
time= sample(60,50,replace=TRUE),
points= sample(1000,50,replace=TRUE))
#Input numbers
my_nums <- c(10,20,30)
#Defining columns
cols <- sapply(1:length(my_nums),function(x){return(paste0("subtotal_under",my_nums[x]))})
dt[,(cols) := lapply(my_nums,function(x) ifelse(time<x,points,NA))]
dt[,total := sum((points)),by=id]
dt[,(cols):= lapply(.SD,sum,na.rm=TRUE),by=id, .SDcols=cols ]
head(dt)
id time points subtotal_under10 subtotal_under20 subtotal_under30 total
1: G 29 655 0 0 1410 1410
2: J 52 354 447 447 1087 1441
3: O 27 271 0 0 271 530
4: X 15 993 214 1207 1207 1207
5: F 5 634 634 634 634 2107
6: X 6 214 214 1207 1207 1207
Edit: To aggregate columns, you can simply change to:
#Defining columns
cols <- sapply(1:length(my_nums),function(x){return(paste0("subtotal_under",my_nums[x]))})
dt[,(cols) := lapply(my_nums,function(x) ifelse(time<x,points,NA))]
dt[,total := points]
dt[,lapply(.SD,function(x){
if (all(is.na(x))){
as.numeric(NA)
} else{
as.numeric(sum(x,na.rm=TRUE))
}
}),by=id, .SDcols=c("total",cols) ]
This should give the expected output of 1 row per ID.
Edit: Per OP's comment below, changed so that 0s are NA, and removed the need for an as.numeric() call when building the columns.
After thinking about this for a while, I think I've arrived at a very simple and fast solution based on conditional sums! The small problem is that I haven't figured out how to automate this code to create a larger number of columns without having to write each one out (one sketch of a way to do that follows the code below). Any help here would be really welcome!
library(data.table)
dt[, .( total = sum(points)
, subtotal_under10 = sum(points[which( time < 10)])
, subtotal_under20 = sum(points[which( time < 20)])
, subtotal_under30 = sum(points[which( time < 30)])
, subtotal_under40 = sum(points[which( time < 40)])
, subtotal_under50 = sum(points[which( time < 50)])
, subtotal_under60 = sum(points[which( time < 60)])), by=id][order(id)]
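On the automation question: one sketch (with cuts holding the assumed thresholds) builds the named list of subtotals inside j, so adding an interval is just a change to cuts:
cuts <- seq(10L, 60L, 10L)
dt[, c(list(total = sum(points)),
       setNames(lapply(cuts, function(ct) sum(points[time < ct])),
                paste0('subtotal_under', cuts))),
   by=id][order(id)]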
microbenchmark
Using the same benchmark proposed by @bgoldst in another answer, this simple solution is much faster than the alternatives:
set.seed(1L)
N <- 1e6L
dt <- data.table(id=sample(LETTERS,N,T),time=sample(60L,N,T),points=sample(1000L,N,T))
library(microbenchmark)
microbenchmark(rafa(copy(dt)),bgoldst(copy(dt)),mike(copy(dt),c(10,20,30,40,50,60)))
# expr min lq mean median uq max neval cld
# rafa(copy(dt)) 95.79 102.45 117.25 110.09 116.95 278.50 100 a
# bgoldst(copy(dt)) 192.53 201.85 211.04 207.50 213.26 354.17 100 b
# mike(copy(dt), c(10, 20, 30, 40, 50, 60)) 844.80 890.53 955.29 921.27 1041.96 1112.18 100 c

Demean R data.table: list of columns

I want to demean a whole data.table object (or just a list of many columns of it) by groups.
Here's my approach so far:
setkey(myDt, groupid)
for (col in colnames(myDt)){
myDt[, paste(col, 'demeaned', sep='.') := col - mean(col), with=FALSE]
}
which gives
Error in col - mean(col) : non-numeric argument to binary operator
Here's some sample data. In this simple case there are only two columns to demean, but I typically have so many columns that I want to iterate over a list:
y groupid x
1: 3.46000 51557094 97
2: 111.60000 51557133 25
3: 29.36000 51557133 23
4: 96.38000 51557133 9
5: 65.22000 51557193 32
6: 66.05891 51557328 10
7: 9.74000 51557328 180
8: 61.59000 51557328 18
9: 9.99000 51557328 18
10: 89.68000 51557420 447
11: 129.24436 51557429 15
12: 3.46000 51557638 3943
13: 117.36000 51557642 11
14: 9.51000 51557653 83
15: 68.16000 51557653 518
16: 96.38000 51557653 14
17: 9.53000 51557678 18
18: 7.96000 51557801 266
19: 51.88000 51557801 49
20: 10.70000 51558040 1034
The problem is that col is a string, so col-mean(col) cannot be computed.
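If you want to keep the loop, a minimal fix (just a sketch) is to look the column up with get() and wrap the new name in parentheses:
for (col in colnames(myDt)){
  myDt[, (paste(col, 'demeaned', sep='.')) := get(col) - mean(get(col)), by=groupid]
}
## (demeaning groupid by itself just yields zeros; exclude it from the loop if that bothers you)
But it is cleaner and faster to do every column in a single operation: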
myNames <- names(myDt)
myDt[,paste(myNames,"demeaned",sep="."):=
lapply(.SD,function(x)x-mean(x)),
by=groupid,.SDcols=myNames]
Comments:
You don't need to set a key.
It's in one operation because using [ repeatedly can be slow.
You can change myNames to some subset of the column names.
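For instance, a sketch that restricts the operation to the two value columns from the sample data:
myNames <- c('y', 'x') ## leave groupid out
myDt[, paste(myNames, 'demeaned', sep='.') :=
       lapply(.SD, function(x) x - mean(x)),
     by=groupid, .SDcols=myNames]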

finding largest consecutive region in table

I'm trying to find regions in a file that have consecutive lines based on two columns, and I want to find the largest span of consecutive values. Two lines are consecutive when the first line's value in column 4 (V3) comes immediately before the second line's value in column 3 (V2); I want to write out the longest such span.
The input looks like this:
> x
grp V1 V2 V3 V4 V5 V6
1: 1 DOG.1 142 144 132 134 0
2: 2 DOG.1 313 315 303 305 0
3: 3 DOG.1 316 318 306 308 0
4: 4 DOG.1 319 321 309 311 0
5: 5 DOG.1 322 324 312 314 0
the output should look like this:
out.name in out
[1,] "DOG.1" "313" "324"
Notice how row x[1,] was removed, and how the output starts at x[2,3] (the V2 value of row 2) and ends at x[5,4] (the V3 value of row 5). All of these values are consecutive.
One obvious way is to take tail(x$V2, -1L) - head(x$V3, -1L) and get the start and end indices corresponding to the maximum run of consecutive 1s (a sketch of that approach is included after the IRanges solution below). Here I'd like to show how this can be done with the help of the IRanges package:
require(data.table)
require(IRanges) ## Bioconductor package
x.ir = reduce(IRanges(x$V2, x$V3)) ## merges overlapping and adjacent ranges
max.idx = which.max(width(x.ir))   ## index of the widest merged range
ans = data.table(out.name = "DOG.1",
                 `in` = start(x.ir)[max.idx], ## `in` is a reserved word, hence the backquotes
                 out = end(x.ir)[max.idx])
ans
#    out.name  in out
# 1:    DOG.1 313 324
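For completeness, here is a base-R sketch of the approach skipped at the top, using rle() over the row-to-row gaps (it assumes at least one consecutive pair exists):
gaps <- tail(x$V2, -1L) - head(x$V3, -1L)            # 1 wherever a row continues the previous one
r <- rle(gaps == 1L)
k <- which(r$values)[which.max(r$lengths[r$values])] # longest run of consecutive pairs
i <- sum(r$lengths[seq_len(k - 1L)]) + 1L            # first row of that span
j <- i + r$lengths[k]                                # last row of that span
data.frame(out.name = x$V1[i], `in` = x$V2[i], out = x$V3[j], check.names = FALSE)
#   out.name  in out
# 1    DOG.1 313 324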
