Running sums for every row of the previous 25 rows - r

I've been trying for a while now to produce code that gives me a new vector containing, for each position, the sum of the previous 25 rows of an original vector.
So say I have a variable Y with 500 rows and I would like a running sum, in a new vector, containing the sum of rows [1:25], then [2:26], and so on for the length of Y. Something like this:
y<-1:500
runsum <- function(x){
  cumsum(x) - cumsum(x[26:length(x)])
}
new<-runsum(y)
I've tried a few different functions here, and even layered the apply family on top, but none seem to produce the right answers.
Would anyone be able to help? I realise it's probably very easy for many in the community here, but any help would be appreciated.
Thanks

This function calculates the sum of the 24 preceding values plus the current value (filter here is stats::filter, which dplyr's filter can mask, hence the explicit namespace):
movsum <- function(x, n = 25) { stats::filter(x, rep(1, n), sides = 1) }
It is easy to adapt to sum only preceding values, if this is what you really want.
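For instance, a minimal sketch of that adaptation (my variant, not part of the original answer): compute the n-window sum and then shift it one position, so each entry holds the sum of the values strictly before it:
movsum_prev <- function(x, n = 24) {
  s <- stats::filter(x, rep(1, n), sides = 1)  # sum of the n values ending at i
  c(NA, head(as.numeric(s), -1))               # shift right: sum over i-n .. i-1
}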

In addition to Roland's answer you could use the zoo library
library(zoo)
y <- 1:500
rollapply(zoo(y), 25, sum)
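If you want the result padded back to the original length (a guess at the intent), rollapply accepts fill and align arguments, and rollsum is a faster special case for sums:
rollapply(zoo(y), 25, sum, fill = NA, align = "right")
rollsum(zoo(y), 25, fill = NA, align = "right")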
HTH

I like Roland's answer better as it relies on a time series function and will probably be pretty quick. Since you mentioned you started going down the path of using apply() and friends, here's one approach to do that:
y<-1:500
#How many to sum at a time?
n <- 25
#Create a matrix of the appropriate start and end points (y's values double as positions here because y is 1:500; for a general y, build this from seq_along(y))
mat <- cbind(start = head(y, -(n-1)), end = tail(y, -(n-1)))
#Check output
rbind(head(mat,3), tail(mat,3))
#-----
       start end
           1  25
           2  26
           3  27
[474,]   474 498
[475,]   475 499
[476,]   476 500
#add together (indexing y by position)
apply(mat, 1, function(x) sum(y[x[1]:x[2]]))
#Is it the same as Roland's answer after removing the NA values it returns?
all.equal(apply(mat, 1, function(x) sum(y[x[1]:x[2]])),
          movsum(y)[-(1:(n-1))])
#-----
[1] TRUE

Related

Find gene overlap as a percentage between lists

This question is a variant of the question found here.
I also want to evaluate the percentage of gene overlap, but instead of doing a pairwise all-by-all comparison within a single list, I want to compare one list to a different list. The original post's answer gave an elegant nested sapply, which I don't think will work in my case.
Here are some example data.
>listOfGenes1 <- list("cellLine1" = c("ENSG001", "ENSG002", "ENSG003"), "cellLine2" = c("ENSG003", "ENSG004"), "cellLine3" = c("ENSG004", "ENSG005"))
>myCellLine <- list("myCellLine" = c("ENSG001", "ENSG002", "ENSG003"))
I want to compare each of the cell lines in listOfGenes1 to the single group in myCellLine, with output something like:
>overlaps
cellLine1 cellLine2 cellLine3
      100        33         0
To be clear, I would like percent overlap, with "myCellLine" as the denominator. Here is what I was trying so far that didn't work out.
overlaps <- sapply(listOfGenes1, function(g1) {round(length(intersect(g1, myCellLine)) / length(myCellLine) * 100)})
You can try:
round(sapply(listOfGenes1, function(i)
100 * length(intersect(i, myCellLine[[1]])) / length(myCellLine[[1]])), 0)
#cellLine1 cellLine2 cellLine3
#      100        33         0
NOTE: Your myCellLine object is a list, hence length(myCellLine) is 1 (the number of list elements), not the number of genes, which is why it will not work as a denominator.
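To illustrate the note:
length(myCellLine)       # 1 -- the list itself has one element
length(myCellLine[[1]])  # 3 -- the character vector of genes inside it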
We can use sapply
sapply(listOfGenes1, function(x) mean(myCellLine[[1]] %in% x) * 100)
#cellLine1 cellLine2 cellLine3
#100.00000 33.33333 0.00000
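Why this works (my gloss, not part of the original answer): %in% returns one logical per gene in myCellLine[[1]], and the mean of a logical vector is the fraction of TRUEs, i.e. the overlap proportion. For example, for cellLine2:
mean(c("ENSG001", "ENSG002", "ENSG003") %in% c("ENSG003", "ENSG004")) * 100
# [1] 33.33333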

correlation loop keep getting NA

Despite using two complete columns where every element is numeric and no values are missing in rows 2 through 570, I find it impossible to get a result other than NA when using a loop to find a rolling 24-week correlation between the two columns.
rolling.correlation <- NULL
temp <- NULL
for(i in 2:547){
  temp <- cor(training.set$return.SPY[i:i+23], training.set$return.TLT[i:i+23])
  rolling.correlation <- c(rolling.correlation, temp)
} #end "for" loop
rolling.correlation
The cor() command works fine for [2:25], [i:25], or [2:i], but R doesn't understand me when I say [i:i+23].
I want R to calculate a correlation for rows 2 thru 25, then 3 thru 26, ..., 547 thru 570. The result should be a vector of length 546 which has numeric values for each correlation. Instead I'm getting a vector of 546 NAs. How can I fix this? Thanks for your help.
Look what happens when you do
5:5+2
# [1] 7
Note that : has higher operator precedence than +, which means 5:5+2 is the same as (5:5)+2, when you really want 5:(5+2). Use
i:(i+23)
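Applied to the loop from the question, the fix is just the parentheses (a sketch; training.set is the questioner's data, which isn't shown here):
rolling.correlation <- numeric(0)
for (i in 2:547) {
  temp <- cor(training.set$return.SPY[i:(i+23)],
              training.set$return.TLT[i:(i+23)])
  rolling.correlation <- c(rolling.correlation, temp)
}
length(rolling.correlation)  # 546, one correlation per 24-row window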

Identifying duplicate columns in a dataframe

I'm an R newbie and am attempting to remove duplicate columns from a largish dataframe (50K rows, 215 columns). The frame has a mix of discrete, continuous, and categorical variables.
My approach has been to generate a table() for each column of the frame, collect these in a list, then use the duplicated() function to find entries of the list that are duplicates, as follows:
age=18:29
height=c(76.1,77,78.1,78.2,78.8,79.7,79.9,81.1,81.2,81.8,82.8,83.5)
gender=c("M","F","M","M","F","F","M","M","F","M","F","M")
testframe = data.frame(age=age,height=height,height2=height,gender=gender,gender2=gender)
tables=apply(testframe,2,table)
dups=which(duplicated(tables))
testframe <- subset(testframe, select = -c(dups))
This isn't very efficient, especially for large continuous variables. However, I've gone down this route because I've been unable to get the same result using summary (note, the following assumes an original testframe containing duplicates):
summaries=apply(testframe,2,summary)
dups=which(duplicated(summaries))
testframe <- subset(testframe, select = -c(dups))
If you run that code you'll see it only removes the first duplicate found. I presume this is because I am doing something wrong. Can anyone point out where I am going wrong or, even better, point me in the direction of a better way to remove duplicate columns from a dataframe?
How about:
testframe[!duplicated(as.list(testframe))]
You can do it with lapply:
testframe[!duplicated(lapply(testframe, summary))]
summary() summarizes the distribution while ignoring the order. Two columns are treated as duplicates when their summaries match, so in principle two different columns with identical summaries would also be dropped.
Not 100% sure, but I would use digest if the data is huge:
library(digest)
testframe[!duplicated(lapply(testframe, digest))]
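The idea is that digest() collapses each column to a short hash, so deduplication only compares hashes rather than whole columns. A quick illustration (my addition):
library(digest)
digest(testframe$height) == digest(testframe$height2)  # TRUE: identical columns hash identically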
A nice trick that you can use is to transpose your data frame and then check for duplicates.
duplicated(t(testframe))
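To actually drop the duplicates with this trick (a sketch; note that t() first coerces a mixed-type data frame to a character matrix, which is what allows the cross-type comparison but can be slow on large frames):
testframe[, !duplicated(t(testframe))]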
unique(testframe, MARGIN=2)
does not work, though I think it should, so try
as.data.frame(unique(as.matrix(testframe), MARGIN=2))
or if you are worried about numbers turning into factors,
testframe[,colnames(unique(as.matrix(testframe), MARGIN=2))]
which produces
   age height gender
1   18   76.1      M
2   19   77.0      F
3   20   78.1      M
4   21   78.2      M
5   22   78.8      F
6   23   79.7      F
7   24   79.9      M
8   25   81.1      M
9   26   81.2      F
10  27   81.8      M
11  28   82.8      F
12  29   83.5      M
It is probably best for you to first find the duplicate column names and treat them accordingly (for example summing the two, taking the mean, first, last, second, mode, etc.). To find the duplicate columns:
names(df)[duplicated(names(df))]
What about just:
unique.matrix(testframe, MARGIN=2)
Actually you would just need to invert the duplicated() result in your code and could stick to using subset(), which is more readable than bracket notation, IMHO:
require(dplyr)
iris %>% subset(., select=which(!duplicated(names(.))))
Here is a simple command that would work if the duplicated columns of your data frame had the same names:
testframe[names(testframe)[!duplicated(names(testframe))]]
If the problem is that dataframes have been merged one time too many using, for example:
testframe2 <- merge(testframe, testframe, by = c('age'))
It is also good to remove the .x suffix from the column names. I applied it here on top of Mostafa Rezaei's great answer (note the escaped, anchored pattern: a bare '.x' would also match any character followed by x, mangling names like 'max'):
testframe2 <- testframe2[!duplicated(as.list(testframe2))]
names(testframe2) <- gsub('\\.x$', '', names(testframe2))
Since this Q&A is a popular Google search result, but the answer is a bit slow for a large matrix, I propose a new version using exponential search and data.table power. It is a function I implemented in the dataPreparation package.
The function is dataPreparation::which_are_in_double (the package also provides the related which_are_bijection):
which_are_in_double(testframe)
This returns 3 and 5, the columns that are duplicated in your example.
Build a data set with the desired dimensions for performance tests:
age=18:29
height=c(76.1,77,78.1,78.2,78.8,79.7,79.9,81.1,81.2,81.8,82.8,83.5)
gender=c("M","F","M","M","F","F","M","M","F","M","F","M")
testframe = data.frame(age=age,height=height,height2=height,gender=gender,gender2=gender)
for (i in 1:12){
  testframe = rbind(testframe, testframe)
}
# Results in 49152 rows
for (i in 1:5){
  testframe = cbind(testframe, testframe)
}
# Results in 160 columns
The benchmark
To perform the benchmark, I use the library rbenchmark, which repeats each computation 100 times:
benchmark(
which_are_in_double(testframe, verbose=FALSE),
duplicated(lapply(testframe, summary)),
duplicated(lapply(testframe, digest))
)
                                             test replications elapsed
3           duplicated(lapply(testframe, digest))          100  39.505
2          duplicated(lapply(testframe, summary))          100  20.412
1 which_are_in_double(testframe, verbose = FALSE)          100  13.581
So which_are_in_double is 1.5 to 3 times faster than the other proposed solutions.
NB 1: I excluded the solution testframe[,colnames(unique(as.matrix(testframe), MARGIN=2))] from the benchmark because it was already 10 times slower with 12k rows.
NB 2: Please note that, the way this data set is constructed, we have a lot of duplicated columns, which reduces the advantage of exponential search. With just a few duplicated columns, performance would be much better for which_are_bijection and similar for the other methods.

Apply multiple functions to column using tapply

Could someone please point out how we can apply multiple functions to the same column using tapply (or any other method: plyr, etc.) so that the result is obtained in distinct columns? For example, if I have a dataframe with:
User MoneySpent
Joe 20
Ron 10
Joe 30
...
I want to get the result as the sum of MoneySpent plus the number of occurrences. I used a function like:
f <- function(x) c(sum(x), length(x))
tapply(df$MoneySpent, df$User, f)
But this does not split it into columns; it gives something like:
Joe Joe 100, 5 # The sum = 100, the number of occurrences = 5, but they get juxtaposed
Thanks in advance,
Raj
You can certainly do stuff like this using ddply from the plyr package:
dat <- data.frame(x = rep(letters[1:3],3),y = 1:9)
ddply(dat,.(x),summarise,total = NROW(piece), count = sum(y))
  x total count
1 a     3    12
2 b     3    15
3 c     3    18
You can keep listing more summary functions, beyond just two, if you like. Note I'm being a little tricky here in calling NROW on an internal variable in ddply called piece. You could have just done something like length(y) instead. (And probably should: referencing the internal variable piece isn't guaranteed to work in future versions, I think. Do as I say, not as I do, and just use length(), as shown below.)
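For reference, the same call using length(y), as the answer suggests (a sketch):
library(plyr)
dat <- data.frame(x = rep(letters[1:3], 3), y = 1:9)
ddply(dat, .(x), summarise, total = length(y), count = sum(y))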
ddply() is conceptually the clearest, but sometimes it is useful to use tapply instead for speed reasons, in which case the following works:
do.call( rbind, tapply(df$MoneySpent, df$User, f) )
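Naming the elements returned by f carries through to the columns of the combined matrix (a small sketch using the question's f; output shown for just the three example rows above):
f <- function(x) c(sum = sum(x), count = length(x))
do.call(rbind, tapply(df$MoneySpent, df$User, f))
#     sum count
# Joe  50     2
# Ron  10     1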

Vectorize my thinking: Vector Operations in R

So earlier I answered my own question on thinking in vectors in R. But now I have another problem which I can't 'vectorize.' I know vectors are faster and loops slower, but I can't figure out how to do this in a vector method:
I have a data frame (which for sentimental reasons I like to call my.data) on which I want to do a full marginal analysis. I need to remove certain elements one at a time, 'value' the data frame, then iterate again, removing only the next element. Then do it again... and again... The idea is to do a full marginal analysis on a subset of my data. Anyhow, I can't conceive of how to do this in a vector-efficient way.
I've shortened the looping part of the code down and it looks something like this:
for (j in my.data$item[my.data$fixed==0]) {  # <-- selects the items I want to loop through
  my.data.it <- my.data[my.data$item != j,]  # <-- this kicks item j out of the list
  sum.data <- aggregate(my.data.it, by=list(year), FUN=sum, na.rm=TRUE)  # <-- do an aggregation
  do(a.little.dance) && make(a.little.love) -> get.down(tonight)  # <-- a little song and dance
  delta <- (get.love)  # <-- get some love
  delta.list <- append(delta.list, delta, after=length(delta.list))  # <-- put my love in a vector
}
So obviously I hacked out a bunch of stuff in the middle, just to make it less clumsy. The goal would be to remove the j loop using something more vector efficient. Any ideas?
Here's what seems like another very R-type way to generate the sums. Generate a vector as long as your input vector, containing nothing but the repeated sum of all its elements. Then subtract your original vector from the sums vector. The result: a vector (isums) where each entry is the total of your original vector less the ith element.
> (my.data$item[my.data$fixed==0])
[1] 1 1 3 5 7
> sums <- rep(sum(my.data$item[my.data$fixed==0]),length(my.data$item[my.data$fixed==0]))
> sums
[1] 17 17 17 17 17
> isums <- sums - (my.data$item[my.data$fixed==0])
> isums
[1] 16 16 14 12 10
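Because sum() is linear, the same thing can be condensed to one line (my note):
v <- my.data$item[my.data$fixed == 0]
isums <- sum(v) - v  # leave-one-out sums: the total minus each element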
Strangely enough, learning to vectorize in R is what helped me get used to basic functional programming. A basic technique is to define your operations inside the loop as a function:
data <- ...
items <- ...
leave_one_out <- function(i) {
  data1 <- data[items != i]
  delta <- ...  # some operation on data1
  return(delta)
}
for (j in items) {
  delta.list <- cbind(delta.list, leave_one_out(j))
}
To vectorize, all you do is replace the for loop with the sapply mapping function:
delta.list <- sapply(items, leave_one_out)
This is no answer, but I wonder if any insight lies in this direction:
> tapply((my.data$item[my.data$fixed==0])[-1], my.data$year[my.data$fixed==0][-1], sum)
tapply produces a table of statistics (sums, in this case; the third argument) grouped by the parameter given as the second argument. For example
2001 2003 2005 2007
1 3 5 7
The [-1] notation drops observation (row) one from the selected rows. So you could loop, using [-i] on each pass and storing each table so the results aren't discarded:
results <- vector("list", length(my.data$item))
for (i in 1:length(my.data$item)) {
  results[[i]] <- tapply((my.data$item[my.data$fixed==0])[-i], my.data$year[my.data$fixed==0][-i], sum)
}
keeping in mind that if you have any years with only 1 observation, then the tables returned by the successive tapply calls won't have the same number of columns. (I.e., if you drop the only observation for 2001, then 2003, 2005, and 2007 would be the only columns returned.)
