I have a matrix that contains 3 columns and 10,000 elements in total. The first and second columns are indexes and the third column is the score. I want to normalize the score column based on this formula:
Normalized_score_i_j = score_i_j / (sqrt(score_i_i) * sqrt(score_j_j))
score_i_j = the current score itself
score_i_i = look at current score's index in first column, and in the dataset look for a score that has that index in both its first and second columns
score_j_j = look at current score's index in second column, and in the dataset look for a score that has that index in both its first and second columns
For example, if df is as follows:
df <- read.table(text = "
First.Protein,Second.Protein,Score
1,1,25
1,2,90
1,3,82
1,4,19
2,1,90
2,2,99
2,3,76
2,4,79
3,1,82
3,2,76
3,3,91
3,4,33
4,1,28
4,2,11
4,3,99
4,4,50
", header = TRUE, sep = ",")
If we are normalizing this row:
First.Protein Second.Protein Score
4 3 99
The normalized score will be:
The score itself, divided by the square root of the score whose First.Protein and Second.Protein are both 4, multiplied by the square root of the score whose First.Protein and Second.Protein are both 3.
Therefore:
Normalized = 99 / (sqrt(50) * sqrt(91)) = 1.467674
I have the code below, but it is behaving strangely and giving me values that are not normalized at all and are in fact very odd:
for (i in 1:nrow(Smith_Waterman_Scores))
{
  Smith_Waterman_Scores$Score[i] <-
    Smith_Waterman_Scores$Score[i] /
    (sqrt(Smith_Waterman_Scores$Score[which(Smith_Waterman_Scores$First.Protein == Smith_Waterman_Scores$First.Protein[i] &
                                            Smith_Waterman_Scores$Second.Protein == Smith_Waterman_Scores$First.Protein[i])])) *
    (sqrt(Smith_Waterman_Scores$Score[which(Smith_Waterman_Scores$First.Protein == Smith_Waterman_Scores$Second.Protein[i] &
                                            Smith_Waterman_Scores$Second.Protein == Smith_Waterman_Scores$Second.Protein[i])]))
}
Here's a re-write of your original attempt. which() is not necessary; just use the logical vector for subsetting. with() lets you refer to variables in the data frame without re-typing the name of the data frame -- easier to read, but also easier to make a mistake:
orig0 <- function(df) {
for(i in 1:nrow(df)) {
df$Score[i] <- with(df, {
ii <- First.Protein == First.Protein[i] &
Second.Protein == First.Protein[i]
jj <- First.Protein == Second.Protein[i] &
Second.Protein == Second.Protein[i]
Score[i] / (sqrt(Score[ii]) * sqrt(Score[jj]))
})
}
df$Score
}
The problem is that Score[ii] and Score[jj] appear on the right-hand side both before and after they've been updated. For example, after i = 1, df$Score[1] has already been overwritten with 25 / (sqrt(25) * sqrt(25)) = 1, so later iterations that look up the (1, 1) 'diagonal' score use 1 instead of 25. Here's a revision where the original columns are treated as 'read-only':
orig1 <- function(df) {
normalized <- numeric(nrow(df)) # pre-allocate
for(i in 1:nrow(df)) {
normalized[i] <- with(df, {
ii <- First.Protein == First.Protein[i] &
Second.Protein == First.Protein[i]
jj <- First.Protein == Second.Protein[i] &
Second.Protein == Second.Protein[i]
Score[i] / (sqrt(Score[ii]) * sqrt(Score[jj]))
})
}
normalized
}
I think the results are now correct (see below). A better implementation would use sapply (or vapply) to avoid having to worry about allocating the return value:
orig2 <- function(df) {
sapply(seq_len(nrow(df)), function(i) {
with(df, {
ii <- First.Protein == First.Protein[i] &
Second.Protein == First.Protein[i]
jj <- First.Protein == Second.Protein[i] &
Second.Protein == Second.Protein[i]
Score[i] / (sqrt(Score[ii]) * sqrt(Score[jj]))
})
})
}
Now that the results are correct, we can ask about performance. Your solution requires a scan of, e.g., First.Protein each time through the loop. There are N = nrow(df) elements of First.Protein, and you go through the loop N times, so you make on the order of N * N = N^2 comparisons -- if you increase the size of the data frame from 10 to 100 rows, the time taken changes from 10 * 10 = 100 units to 100 * 100 = 10000 units.
Several of the answers avoid that quadratic scaling. My answer does it using match() on a vector of values; this scales roughly as N (each look-up occurs in approximately constant time, and there are N look-ups), which is much better than quadratic.
Create a subset of data with identical first and second proteins
ii = df[df$First.Protein == df$Second.Protein,]
Here's the ijth score from the original data frame
s_ij = df$Score
Look up First.Protein of df in ii and record the score; likewise for Second.Protein
s_ii = ii[match(df$First.Protein, ii$First.Protein), "Score"]
s_jj = ii[match(df$Second.Protein, ii$Second.Protein), "Score"]
The normalized scores are then
> s_ij / (sqrt(s_ii) * sqrt(s_jj))
[1] 1.0000000 1.8090681 1.7191871 0.5374012 1.8090681 1.0000000 0.8007101
[8] 1.1228571 1.7191871 0.8007101 1.0000000 0.4892245 0.7919596 0.1563472
[15] 1.4676736 1.0000000
This will be fast, using a single call to match() instead of many calls to which() inside a for loop or tests for identity inside an apply() -- both of the latter make N^2 comparisons and so scale very poorly.
I summarized some of the proposed solutions as
f0 <- function(df) {
contingency = xtabs(Score ~ ., df)
diagonals <- unname(diag(contingency))
i <- df$First.Protein
j <- df$Second.Protein
idx <- matrix(c(i, j), ncol=2)
contingency[idx] / (sqrt(diagonals[i]) * sqrt(diagonals[j]))
}
f1 <- function(df) {
ii = df[df$First.Protein == df$Second.Protein,]
s_ij = df$Score
s_ii = ii[match(df$First.Protein, ii$First.Protein), "Score"]
s_jj = ii[match(df$Second.Protein, ii$Second.Protein), "Score"]
s_ij / (sqrt(s_ii) * sqrt(s_jj))
}
f2 <- function(dt) {
dt.lookup <- dt[First.Protein == Second.Protein]
setkey(dt,"First.Protein" )
setkey(dt.lookup,"First.Protein" )
colnames(dt.lookup) <- c("First.Protein","Second.Protein","Score1")
dt <- dt[dt.lookup]
setkey(dt,"Second.Protein" )
setkey(dt.lookup,"Second.Protein")
colnames(dt.lookup) <- c("First.Protein","Second.Protein","Score2")
dt[dt.lookup][
, Normalized := Score / (sqrt(Score1) * sqrt(Score2))][
, .(First.Protein, Second.Protein, Normalized)]
}
f3 <- function(dt) {
eq = dt[First.Protein == Second.Protein]
dt[eq, Score_ii := i.Score, on = "First.Protein"]
dt[eq, Score_jj := i.Score, on = "Second.Protein"]
dt[, Normalised := Score/sqrt(Score_ii * Score_jj)]
dt[, c("Score_ii", "Score_jj") := NULL]
}
I know how to programmatically check that the first two generate consistent results. I don't know data.table well enough to get the normalized result out of f2() in the same order as the input rows, so I can't compare it with the others (though it looks correct 'by eye'). f3() produces numerically similar but not identical results:
> identical(orig1(df), f0(df))
[1] TRUE
> identical(f0(df), f1(df))
[1] TRUE
> identical(f0(df), { f3(dt3); dt3[["Normalised"]] }) # pass by reference!
[1] FALSE
> all.equal(f0(df), { f3(dt3); dt3[["Normalised"]] })
[1] TRUE
There are performance differences
library(data.table)
dt2 <- as.data.table(df)
dt3 <- as.data.table(df)
library(microbenchmark)
microbenchmark(f0(df), f1(df), f2(dt2), f3(dt3))
with
> microbenchmark(f0(df), f1(df), f2(dt2), f3(dt3))
Unit: microseconds
expr min lq mean median uq max neval
f0(df) 967.117 992.8365 1059.7076 1030.9710 1094.247 2384.360 100
f1(df) 176.238 192.8610 210.4059 207.8865 219.687 333.260 100
f2(dt2) 4884.922 4947.6650 5156.0985 5017.1785 5142.498 6785.975 100
f3(dt3) 3281.185 3329.4440 3463.8073 3366.3825 3443.400 5144.430 100
The solutions f0 - f3 are likely to scale well (especially data.table) with real data; the fact that the times are in microseconds probably means that speed is not important (now that we are not implementing an N^2 algorithm).
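To see the quadratic versus roughly linear behaviour directly, one can time orig2() against f1() on a larger simulated data set (a sketch; the proteins and scores below are made up purely for illustration):
## every i-j pair for 50 'proteins', with arbitrary positive scores
set.seed(123)
n <- 50
big <- expand.grid(First.Protein = 1:n, Second.Protein = 1:n)
big$Score <- runif(nrow(big), 1, 100)
library(microbenchmark)
microbenchmark(orig2(big), f1(big), times = 5)
The gap between the two grows roughly quadratically as the number of rows increases.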
On reflection, a more straightforward implementation of f1() just looks up the 'diagonal' elements:
f1a <- function(df) {
ii = df[df$First.Protein == df$Second.Protein, ]
d = sqrt(ii$Score[order(ii$First.Protein)])
df$Score / (d[df$First.Protein] * d[df$Second.Protein])
}
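A quick check, for illustration, that f1a() agrees with f1() on the example data:
all.equal(f1(df), f1a(df))
# [1] TRUE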
You may be doing this in a very round-about manner. Can you see if this works for you:
R> xx
First Second Score
1 1 1 25
2 1 2 90
3 1 3 82
4 1 4 19
5 2 1 90
6 2 2 99
7 2 3 76
8 2 4 79
9 3 1 82
10 3 2 76
11 3 3 91
12 3 4 33
13 4 1 28
14 4 2 11
15 4 3 99
16 4 4 50
R> contingency = xtabs(Score ~ ., data=xx)
R> contingency
Second
First 1 2 3 4
1 25 90 82 19
2 90 99 76 79
3 82 76 91 33
4 28 11 99 50
R> diagonals <- unname(diag(contingency))
R> diagonals
[1] 25 99 91 50
R> normalize <- function (i, j, contingencies, diagonals) {
+ contingencies[i, j] / (sqrt(diagonals[i]) * sqrt(diagonals[j]))
+ }
R> normalize(4, 3, contingency, diagonals)
[1] 1.467674
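To normalize every row at once rather than a single (i, j) pair, the same contingency table can be indexed with a two-column matrix of (i, j) pairs (a sketch along the lines of f0() above; the Normalized column name is just for illustration):
idx <- cbind(xx$First, xx$Second)        # one (i, j) pair per row of xx
xx$Normalized <- contingency[idx] / (sqrt(diagonals[xx$First]) * sqrt(diagonals[xx$Second]))
xx[15, ]                                 # row (4, 3) again gives 1.467674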
Here's how I'd approach this using data.table. Hopefully @MartinMorgan finds this easier to understand :-).
require(data.table) # v1.9.6+
dt = as.data.table(df) # or use setDT(df) to convert by reference
eq = dt[First.Protein == Second.Protein]
So far, I've just created a new data.table eq which contains all rows where both columns are equal.
dt[eq, Score_ii := i.Score, on = "First.Protein"]
dt[eq, Score_jj := i.Score, on = "Second.Protein"]
Here we add the columns Score_ii and Score_jj while joining on First.Protein and Second.Protein respectively. That these are join operations should be clear from the on= argument. The i. prefix refers to the Score column of the data.table provided in the i-argument (here, eq's Score).
Note that we could use match() here as well. But that wouldn't work if you had to look up directly (and as efficiently) on more than one column. Using on=, we can extend this quite easily, and it is also much easier to read and understand.
Once we have all the required columns, the task is just to compute the final Normalised column (and delete the intermediates if they're not needed).
dt[, Normalised := Score/sqrt(Score_ii * Score_jj)]
dt[, c("Score_ii", "Score_jj") := NULL] # delete if you don't want them
I'll leave out the micro- and milli- second benchmarks as I'm not interested in them.
PS: The columns Score_ii and Score_jj are added above on purpose under the assumption that you might need them. If you don't want them at all, you can also do:
Score_ii = eq[dt, Score, on = "First.Protein"] ## -- (1)
Score_jj = eq[dt, Score, on = "Second.Protein"]
(1) reads: for each row in dt, get the matching row in eq (matching on column First.Protein) and extract the eq$Score corresponding to that matching row.
Then, we can directly add the Normalised column as:
dt[, Normalised := Score / sqrt(Score_ii * Score_jj)]
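Either way, a quick spot-check against the worked example from the question (for illustration):
dt[First.Protein == 4 & Second.Protein == 3, Normalised]
# [1] 1.467674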
You can implement this with joins; here is an example using data.table:
library(data.table)
dt <- data.table(df)
dt.lookup <- dt[First.Protein == Second.Protein]
setkey(dt,"First.Protein" )
setkey(dt.lookup,"First.Protein" )
colnames(dt.lookup) <- c("First.Protein","Second.Protein","Score1")
dt <- dt[dt.lookup]
setkey(dt,"Second.Protein" )
setkey(dt.lookup,"Second.Protein")
colnames(dt.lookup) <- c("First.Protein","Second.Protein","Score2")
dt <- dt[dt.lookup][
, Normalized := Score / (sqrt(Score1) * sqrt(Score2))][
, .(First.Protein, Second.Protein, Normalized)]
Just make sure you don't use for loops.
Loop through rows using apply:
#compute
df$ScoreNorm <-
apply(df, 1, function(i){
i[3] /
(
sqrt(df[ df$First.Protein == i[1] &
df$Second.Protein == i[1], "Score"]) *
sqrt(df[ df$First.Protein == i[2] &
df$Second.Protein == i[2], "Score"])
)
})
#test output
df[15, ]
# First.Protein Second.Protein Score ScoreNorm
# 15 4 3 99 1.467674
I just saw a YouTube video from Numberphile on the Yellowstone sequence (A098550). It's based on a sequence starting with 1 and 2, with subsequent terms generated by the rules:
no repeated terms
always pick the lowest integer
gcd(a_n, a_(n-1)) = 1
gcd(a_n, a_(n-2)) > 1
The first 15 terms would be: 1 2 3 4 9 8 15 14 5 6 25 12 35 16 7
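For instance, the fifth term is 9: it is the smallest unused integer that is coprime with the previous term (4) and shares a factor with the term before that (3) -- 5 and 7 share no factor with 3, while 6 and 8 are not coprime with 4. A quick check with DescTools, for illustration:
library(DescTools)
GCD(9, 4) == 1   # coprime with the previous term       -> TRUE
GCD(9, 3) > 1    # shares a factor with the term before -> TRUE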
A Q&D approach in R could be something like this, but understandably it becomes very slow when trying to generate longer sequences. It also makes an assumption about the highest number that can occur in the sequence (as info: the sequence of 10,000 items never goes higher than 5000).
What can we do to make this faster?
library(DescTools)
a <- c(1, 2, 3)
p <- length(a)
# all natural numbers
all_ints <- 1:5000
for (n in p:1000) {
# rule 1 - remove all numbers that are in the sequence already
next_a_set <- all_ints[which(!all_ints %in% a)]
# rule 3 - search the remaining set for numbers that have gcd == 1
next_a_option <- next_a_set[which(
sapply(
next_a_set,
function(x) GCD(a[n], x)
) == 1
)]
# rule 4 - search the remaining number for gcd > 1
next_a <- next_a_option[which(
sapply(
next_a_option,
function(x) GCD(a[n - 1], x)
) > 1
)]
# select the lowest
a <- c(a, min(next_a))
n <- n + 1
}
Here's a version that's about 20 times faster than yours, with comments about the changes:
# Set a to the final length from the start.
a <- c(1, 2, 3, rep(NA, 997))
p <- 3
# Define a vectorized gcd() function. We'll be testing
# lots of gcds at once. This uses the Euclidean algorithm.
gcd <- function(x, y) { # vectorized gcd
while (any(y != 0)) {
x1 <- ifelse(y == 0, x, y)
y <- ifelse(y == 0, 0, x %% y)
x <- x1
}
x
}
# Guess at a reasonably large vector to work from,
# but we'll grow it later if not big enough.
allnum <- 1:1000
# Keep a logical record of what has been used
used <- c(rep(TRUE, 3), rep(FALSE, length(allnum) - 3))
for (n in p:1000) {
# rule 1 - remove all numbers that are in the sequence already
# nothing to do -- used already records that.
repeat {
# rule 3 - search the remaining set for numbers that have gcd == 1
keep <- !used & gcd(a[n], allnum) == 1
# rule 4 - search the remaining number for gcd > 1
keep <- keep & gcd(a[n-1], allnum) > 1
# If we found anything, break out of this loop
if (any(keep))
break
# Otherwise, make the set of possible values twice as big,
# and try again
allnum <- seq_len(2*length(allnum))
used <- c(used, rep(FALSE, length(used)))
}
# select the lowest
newval <- which.max(keep)
# Assign into the appropriate place
a[n+1] <- newval
# Record that it has been used
used[newval] <- TRUE
}
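As a quick sanity check that the vectorized gcd() above works element-wise (values here worked out by hand for illustration):
gcd(c(12, 100, 7), c(18, 75, 14))
# [1]  6 25  7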
If you profile it, you'll see it spends most of its time in the gcd() function. You could probably make that a lot faster by redoing it in C or C++.
The biggest change here is pre-allocation and restricting the search to numbers that have not yet been used.
library(numbers)
N <- 5e3
a <- integer(N)
a[1:3] <- 1:3
b <- logical(N) # which numbers have been used already?
b[1:3] <- TRUE
NN <- 1:N
system.time({
for (n in 4:N) {
a1 <- a[n - 1L]
a2 <- a[n - 2L]
for (k in NN[!b]) {
if (GCD(k, a1) == 1L & GCD(k, a2) > 1L) {
a[n] <- k
b[k] <- TRUE
break
}
}
if (!a[n]) {
a <- a[1:(n - 1L)]
break
}
}
})
#> user system elapsed
#> 1.28 0.00 1.28
length(a)
#> [1] 1137
For a fast C++ algorithm, see here.
I am looking for a RAM-efficient way to calculate the median over a complement set with the help of data.table.
For a set of observations from different groups, I am interested in an implementation of a median of the "other groups". I.e., if I have a data.table with one value column and one grouping column, I want to calculate, for each group, the median of the values in all groups except the current one. E.g. for group 1 we calculate the median from all values except those that belong to group 1, and so on.
A concrete example data.table:
dt <- data.table(value = c(1,2,3,4,5), groupId = c(1,1,2,2,2))
dt
# value groupId
# 1: 1 1
# 2: 2 1
# 3: 3 2
# 4: 4 2
# 5: 5 2
I would like the medianOfAllTheOtherGroups to be defined as 1.5 for group 2
and defined as 4 for group 1, repeated for each entry in the same data.table:
dt <- data.table(value = c(1,2,3,4,5), groupId = c(1,1,2,2,2), medianOfAllTheOtherGroups = c(4, 4, 1.5, 1.5, 1.5))
dt
# value groupId medianOfAllTheOtherGroups
# 1: 1 1 4.0 # median of all groups _except_ 1
# 2: 2 1 4.0
# 3: 3 2 1.5 # median of all groups _except_ 2
# 4: 4 2 1.5
# 5: 5 2 1.5
To calculate the median for each group only once and not for each observation, we went for an implementation with a loop.
The current complete implementation works nicely for small data.tables as input, but it suffers from large RAM consumption on larger data sets, with the medians computed in a loop as the bottleneck (note: for the real use case we have a dt with 3,000,000 rows and 100,000 groups).
I have worked very little on improving RAM consumption. Can an expert help to reduce the RAM usage of the minimal example that I provide below?
MINIMAL EXAMPLE:
library(data.table)
set.seed(1)
numberOfGroups <- 10
numberOfValuesPerGroup <- 100
# Data table with columns
#   groupId - id of the group each observation belongs to
#   value   - value we want to calculate the median over
dt <-
data.table(
groupId = as.character(rep(1:numberOfGroups, each = numberOfValuesPerGroup)),
value = round(runif(n = numberOfGroups * numberOfValuesPerGroup), 4)
)
# calculate the median from all observations for those groups that do not
# require a separate treatment
medianOfAllGroups <- median(dt$value)
dt$medianOfAllTheOtherGroups <- medianOfAllGroups
# generate extra data.table to collect results for selected groups
includedGroups <- dt[, unique(groupId)]
dt_otherGroups <-
data.table(groupId = includedGroups,
medianOfAllTheOtherGroups = as.numeric(NA)
)
# loop over all selected groups and calculate the median from all observations
# except of those that belong to this group
for (id in includedGroups){
dt_otherGroups[groupId == id,
medianOfAllTheOtherGroups := median(dt[groupId != id, value])]
}
# merge subset data to overall data.table
dt[dt_otherGroups, medianOfAllTheOtherGroups := i.medianOfAllTheOtherGroups,
on = c("groupId")]
PS: here is the example output for 10 groups with 100 observations each:
dt
# groupId value medianOfAllTheOtherGroups
# 1: 1 0.2655 0.48325
# 2: 1 0.3721 0.48325
# 3: 1 0.5729 0.48325
# 4: 1 0.9082 0.48325
# 5: 1 0.2017 0.48325
# ---
# 996: 10 0.7768 0.48590
# 997: 10 0.6359 0.48590
# 998: 10 0.2821 0.48590
# 999: 10 0.1913 0.48590
# 1000: 10 0.2655 0.48590
Some numbers for different settings of the minimal example (tested on a Mac Book Pro with 16Gb RAM):
numberOfGroups   numberOfValuesPerGroup   Memory (GB)   Runtime (s)
           500                       50          0.48          1.47
          5000                       50         39.00         58.00
            50                     5000          0.42          0.65
All memory values were extracted from the output of profvis (the example screenshot for the smallest setting is not reproduced here).
How about this approach:
setkey(dt, groupId)
dt[, median_val := median(dt$value[dt$groupId != groupId]), by = .(groupId)]
For the 5000 groups with 50 values each case this took ~34 seconds on my MBP. Haven't checked RAM usage though.
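As a sanity check on the small example from the question (a sketch; dt_small is just the 5-row table from the top of the question):
dt_small <- data.table(value = c(1, 2, 3, 4, 5), groupId = c(1, 1, 2, 2, 2))
dt_small[, median_val := median(dt_small$value[dt_small$groupId != groupId]), by = .(groupId)]
unique(dt_small[, .(groupId, median_val)])
#    groupId median_val
# 1:       1        4.0
# 2:       2        1.5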
Edit: here's another version with two changes, (1) using collapse::fmedian as suggested by Henrik and (2) pre-aggregating the values into a list column by group.
d2 = dt[, .(value = list(value)), keyby = .(groupId)]
setkey(dt, groupId)
dt[, median_val :=
fmedian(d2[-.GRP, unlist(value, use.names = FALSE, recursive = FALSE)]),
by = .(groupId)]
This took around 18 seconds for the 5000/50 example on my machine.
RAM usage according to profvis: approach 1 ~28 GB, approach 2 ~15 GB.
Disclaimer: For some reason the profiling keeps crashing my session, so unfortunately I have no such results. However, because my alternatives were a bit faster than OP, I thought it could still be worth posting them so that OP may assess their memory use.
Data
# numberOfGroups <- 5000
# numberOfValuesPerGroup <- 50
# dt <- ...as in OP...
d1 = copy(dt)
d1[ , ri := .I] # just to be able to restore the original order when comparing results with OP
d2 = copy(dt)
d3 = copy(dt)
Explanation
I shamelessly borrow lines 28, 30-32 from median.default to make a stripped-down version of median.
Calculate total number of rows in the original data (nrow(d1)). Order data by 'value' (setorder). By ordering, two instances of sort in the median code can be removed.
For each 'groupID' (by = groupId):
Calculate length of "other" (number of rows in the original data minus number of rows of current group (.N)).
Calculate median, where the input values are d1$value[-.I], i.e. the original values except the indices of the current group; ?.I: "While grouping, it holds for each item in the group, its row location in x".
Code & Timing
system.time({
# number of rows in original data
nr = nrow(d1)
# order by value
setorder(d1, value)
d1[ , med := {
# length of "other"
n = nr - .N
# ripped from median
half = (n + 1L) %/% 2L
if (n %% 2L == 1L) d1$value[-.I][half]
else mean(d1$value[-.I][half + 0L:1L])
}, by = groupId]
})
# user system elapsed
# 4.08 0.01 4.07
# OP's code on my (old) PC
# user system elapsed
# 84.02 7.26 86.75
# restore original order & check equality
setorder(d1, ri)
all.equal(dt$medianOfAllTheOtherGroups, d1$med)
# [1] TRUE
Comparison with base::median & collapse::fmedian
I also tried the "-.I" with base::median and collapse::fmedian, where the latter was about twice as fast as base::median.
system.time(
d2[ , med := median(d2$value[-.I]), by = groupId]
)
# user system elapsed
# 26.86 0.02 26.85
library(collapse)
system.time(
d3[ , med := fmedian(d3$value[-.I]), by = groupId]
)
# user system elapsed
# 16.95 0.00 16.96
all.equal(dt$medianOfAllTheOtherGroups, d2$med)
# TRUE
all.equal(dt$medianOfAllTheOtherGroups, d3$med)
# TRUE
Thanks a lot to #Cole for helpful comments which improved the performance.
The median is the midpoint of a dataset that's been ordered. For an odd number of values in a dataset, the median is simply the middle number. For an even number of values in a dataset, the median is the mean of the two numbers closest to the middle.
To demonstrate, consider the simple vector of 1:8
1 | 2 | 3 |** 4 | 5 **| 6 | 7 | 8
In this case, our midpoint is 4.5. And because this is a very simple example, the median itself is 4.5
Now consider groupings where one grouping is the first value of the vector. That is, our group is only 1. We know that this will shift our median towards the right (i.e. larger) because we removed a low value of the distribution. Our new distribution is 2:8 and the median is now 5.
2 | 3 | 4 | *5* | 6 | 7 | 8
This is only interesting if we can determine a relationship between these shifts. Specifically, our original midpoint was 4.5. Our new midpoint based on the original vector is 5.
Let's demonstrate a larger mixture with a group of 1, 3, and 7. In this case, we have 2 values below the original midpoint and one value above the original midpoint. Our new median is 5:
2 | 4 | ** 5 ** | 6 | 8
So empirically, we have determined that removing smaller numbers from the distribution shifts our mid_point index by 0.5 and removing larger numbers from the distribution shifts our mid_point index by -0.5. There are a few other stipulations:
We need to make sure that our grouping index is not in the new mid_point calculation. Consider a group of 1, 2, and 5. Based on my math, we would shift up by 0.5 based on (2 below - 1 above) / 2 for a new mid_point of 5. That's wrong because 5 was already used up! We need to account for this.
3 | 4 | ** 6 ** | 7 | 8
Likewise, with our shifted mid_point, we also need to look back to verify that our ranking values are still aligned. In a sequence of 1:20, consider a group of c(1:9, 11). While 11 is above the original mid_point of 10.5, it is not above the shifted mid_point of 10.5 + (9 below - 1 above) / 2 = 14.5. But our actual median would be 15.5, because 11 is now below the new midway point.
10 | 12 | 13 | 14 | ** 15 | 16 **| 17 | 18 | 19 | 20
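The shifts above can be verified directly with median() on the complements (plain base R, just for illustration):
median(1:8)                          # 4.5  original midpoint
median(setdiff(1:8, 1))              # 5    group {1} removed
median(setdiff(1:8, c(1, 3, 7)))     # 5    group {1, 3, 7} removed
median(setdiff(1:8, c(1, 2, 5)))     # 6    group {1, 2, 5} removed (5 itself is gone)
median(setdiff(1:20, c(1:9, 11)))    # 15.5 group {1:9, 11} removed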
TL;DR: what's the code?
In all of the examples above, the group's rank positions are given in data.table by the special symbol .I, assuming we did setorder(). If we do the same math as above, we don't have to waste time subsetting the dataset; we can instead determine what the new index(es) should be based on what's been removed from the distribution.
setorder(dt, value)
nr = nrow(dt)
is_even = nr %% 2L == 0L
mid_point = (nr + 1L) / 2L
dt[, medianOfAllTheOtherGroups :=
{
below = sum(.I < mid_point)
is_midpoint = is_even && below && (.I[below] + 1L == mid_point)
above = .N - below - is_midpoint
new_midpoint = (below - above) / 2L + mid_point
## TODO turn this into a loop in case there are multiple values for which this is true
if (new_midpoint > mid_point && above &&.I[below + 1] < new_midpoint) { ## check to make sure that none of the indices were above
below = below - 1L
new_midpoint = new_midpoint + 1L
} else if (new_midpoint < mid_point && below && .I[below] > new_midpoint) {
below = below + 1L
new_midpoint = new_midpoint - 1L
}
if (((nr - .N + 1L) %% 2L) == 0L) {
dt$value[new_midpoint]
} else {
##TODO turn this into a loop in case there are multiple values that this is true for.
default_inds = as.integer(new_midpoint + c(-0.5, 0.5))
if (below) {
if (.I[below] == default_inds[1L])
default_inds[1L] = .I[below] - 1L
}
if (above) {
if (.I[below + 1L + is_midpoint] == default_inds[2L])
default_inds[2L] = .I[below + 1L] + 1L
}
mean(dt$value[default_inds])
}
}
, by = groupId]
Performance
This is using bench::mark, which checks that all results are equal. For Henrik's and my solutions, I reorder the results back to the original grouping so that they are all equal.
Note that while this (complicated) algorithm is the most efficient, I do want to emphasize that most of these likely do not have extreme peak RAM usage. The other answers have to subset 5,000 times to allocate a vector of length 249,950 in order to calculate a new median. That's about 2 MB per loop just on allocation (e.g., around 10 GB overall).
# A tibble: 6 x 13 (columns after mem_alloc omitted)
  expression            min   median `itr/sec` mem_alloc
  <bch:expr>       <bch:tm> <bch:tm>     <dbl> <bch:byt>
1 cole              225.7ms  271.8ms    3.68      6.34MB
2 henrik_smart_med    17.7s    17.7s    0.0564   23.29GB
3 henrik_base_med      1.6m     1.6m    0.0104   41.91GB
4 henrik_fmed         55.9s    55.9s    0.0179   32.61GB
5 christian_lookup    54.7s    54.7s    0.0183   51.39GB
6 talat_unlist        35.9s    35.9s    0.0279   19.02GB
Full profile code
library(data.table)
library(collapse)
set.seed(76)
numberOfGroups <- 5000
numberOfValuesPerGroup <- 50
dt <-
data.table(
groupId = (rep(1:numberOfGroups, each = numberOfValuesPerGroup)),
value = round(runif(n = numberOfGroups * numberOfValuesPerGroup, 0, 10), 4)
)
## this is largely instantaneous.
dt[ , ri := .I]
bench::mark( cole = {
setorder(dt, value)
nr = nrow(dt)
is_even = nr %% 2L == 0L
mid_point = (nr + 1L) / 2L
dt[, medianOfAllTheOtherGroups :=
{
below = sum(.I < mid_point)
is_midpoint = is_even && below && (.I[below] + 1L == mid_point)
above = .N - below - is_midpoint
new_midpoint = (below - above) / 2L + mid_point
## TODO turn this into a loop in case there are multiple values for which this is true
if (new_midpoint > mid_point && above &&.I[below + 1] < new_midpoint) { ## check to make sure that none of the indices were above
below = below - 1L
new_midpoint = new_midpoint + 1L
} else if (new_midpoint < mid_point && below && .I[below] > new_midpoint) {
below = below + 1L
new_midpoint = new_midpoint - 1L
}
if (((nr - .N + 1L) %% 2L) == 0L) {
as.numeric(dt$value[new_midpoint])
} else {
##TODO turn this into a loop in case there are multiple values that this is true for.
default_inds = as.integer(new_midpoint + c(-0.5, 0.5))
if (below) {
if (.I[below] == default_inds[1L])
default_inds[1L] = .I[below] - 1L
}
if (above) {
if (.I[below + 1L + is_midpoint] == default_inds[2L])
default_inds[2L] = .I[below + 1L] + 1L
}
mean(dt$value[default_inds])
}
}
, by = groupId]
setorder(dt, ri)
},
henrik_smart_med = {
# number of rows in original data
nr = nrow(dt)
# order by value
setorder(dt, value)
dt[ , medianOfAllTheOtherGroups := {
# length of "other"
n = nr - .N
# ripped from median
half = (n + 1L) %/% 2L
if (n %% 2L == 1L) dt$value[-.I][half]
else mean(dt$value[-.I][half + 0L:1L])
}, by = groupId]
setorder(dt, ri)
},
henrik_base_med = {
dt[ , med := median(dt$value[-.I]), by = groupId]
},
henrik_fmed = {
dt[ , med := fmedian(dt$value[-.I]), by = groupId]
},
christian_lookup = {
nrows <- dt[, .N]
dt_match <- dt[, .(nrows_other = nrows- .N), by = .(groupId_match = groupId)]
dt_match[, odd := nrows_other %% 2]
dt_match[, idx1 := ceiling(nrows_other/2)]
dt_match[, idx2 := ifelse(odd, idx1, idx1+1)]
setkey(dt, value)
dt_match[, medianOfAllTheOtherGroups := dt[groupId != groupId_match][c(idx1, idx2), sum(value)/2], by = groupId_match]
dt[dt_match, medianOfAllTheOtherGroups := i.medianOfAllTheOtherGroups,
on = c(groupId = "groupId_match")]
},
talat_unlist = {
d2 = dt[, .(value = list(value)), keyby = .(groupId)]
setkey(dt, groupId)
dt[, medianOfAllTheOtherGroups :=
fmedian(d2[-.GRP, unlist(value, use.names = FALSE, recursive = FALSE)]),
by = .(groupId)]
})
Approach for exact results:
Median is "the middle" value of a sorted vector. (or mean of two middle values for even length vector)
If we know the length of the sorted vector of others, we can directly look up the corresponding vector element(s) index for the median thus avoiding actually computing the median n*groupId times:
library(data.table)
set.seed(1)
numberOfGroups <- 5000
numberOfValuesPerGroup <- 50
dt <-
data.table(
groupId = as.character(rep(1:numberOfGroups, each = numberOfValuesPerGroup)),
value = round(runif(n = numberOfGroups * numberOfValuesPerGroup), 4)
)
# group count match table + idx position for median of others
nrows <- dt[, .N]
dt_match <- dt[, .(nrows_other = nrows- .N), by = .(groupId_match = groupId)]
dt_match[, odd := nrows_other %% 2]
dt_match[, idx1 := ceiling(nrows_other/2)]
dt_match[, idx2 := ifelse(odd, idx1, idx1+1)]
setkey(dt, value)
dt_match[, medianOfAllTheOtherGroups := dt[groupId != groupId_match][c(idx1, idx2), sum(value)/2], by = groupId_match]
dt[dt_match, medianOfAllTheOtherGroups := i.medianOfAllTheOtherGroups,
on = c(groupId = "groupId_match")]
There might be more data.table-ish ways of improving performance further, I guess.
Memory/runtime for numberOfGroups = 5000 and numberOfValuesPerGroup = 50: 20 GB, 27,000 ms.
I am trying to pass all columns from a data.frame matching a criterion to a function within the summarize function of dplyr, as follows:
df %>% group_by(Version, Type) %>%
summarize(mcll(TrueClass, starts_with("pred")))
Error: argument is of length zero
Is there a way to do this? A working example follows:
Build a simulated data.frame of sample predictions. These are interpreted as the output of a classification algorithm.
library(dplyr)
nrow <- 40
ncol <- 4
set.seed(567879)
getProbs <- function(i) {
p <- runif(i)
return(p / sum(p))
}
df <- data.frame(matrix(NA, nrow, ncol))
for (i in seq(nrow)) df[i, ] <- getProbs(ncol)
names(df) <- paste0("pred.", seq(ncol))
add a column indicating the true class
df$TrueClass <- factor(ceiling(runif(nrow, min = 0, max = ncol)))
add categorical columns for sub-setting
df$Type <- c(rep("a", nrow / 2), rep("b", nrow / 2))
df$Version <- rep(1:4, times = nrow / 4)
now I want to calculate the Multiclass LogLoss for these predictions using the function below:
mcll <- function (act, pred)
{
if (class(act) != "factor") {
stop("act must be a factor")
}
pred[pred == 0] <- 1e-15
pred[pred == 1] <- 1 - 1e-15
dummies <- model.matrix(~act - 1)
if (nrow(dummies) != nrow(pred)) {
return(0)
}
return(-1 * (sum(dummies * log(pred)))/length(act))
}
this is easily done with the entire data set
act <- df$TrueClass
pred <- df %>% select(starts_with("pred"))
mcll(act, pred)
but I want to use dplyr group_by to calculate mcll for each subset of the data
df %>% group_by(Version, Type) %>%
summarize(mcll(TrueClass, starts_with("pred")))
Ideally I could do this without changing the mcll() function, but I am open to doing that if it simplifies the other code.
Thanks!
EDIT: Note that the input to mcll is a vector of true values and a matrix of probabilities with one column for each "pred" column. For each subset of data, mcll should return a scalar. I can get exactly what I want with the code below, but I was hoping for something within the context of dplyr.
mcll_df <- data.frame(matrix(ncol = 3, nrow = 8))
names(mcll_df) <- c("Type", "Version", "mcll")
count = 1
for (ver in unique(df$Version)) {
for (type in unique(df$Type)) {
subdat <- df %>% filter(Type == type & Version == ver)
val <- mcll(subdat$TrueClass, subdat %>% select(starts_with("pred")))
mcll_df[count, ] <- c(Type = type, Version = ver, mcll = val)
count = count + 1
}
}
head(mcll_df)
Type Version mcll
1 a 1 1.42972507510096
2 b 1 1.97189000832723
3 a 2 1.97988830406062
4 b 2 1.21387875938737
5 a 3 1.30629638026735
6 b 3 1.48799237895462
This is easy to do using data.table:
library(data.table)
setDT(df)[, mcll(TrueClass, .SD), by = .(Version, Type), .SDcols = grep("^pred", names(df))]
# Version Type V1
#1: 1 a 1.429725
#2: 2 a 1.979888
#3: 3 a 1.306296
#4: 4 a 1.668330
#5: 1 b 1.971890
#6: 2 b 1.213879
#7: 3 b 1.487992
#8: 4 b 1.171286
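A spot-check of the first grouped value against a manual subset, using the original mcll() from the question (a sketch):
sub <- as.data.frame(df)[df$Type == "a" & df$Version == 1, ]
mcll(sub$TrueClass, sub[, grep("^pred", names(sub))])
# [1] 1.429725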
I had to change the mcll function a little bit, but then it worked. The problem occurs with the second if statement. You are telling the function to get nrow(pred), but if you are summarizing over multiple columns you are actually only supplying a vector each time (because each column gets analyzed separately). Additionally, I switched the order of the arguments being passed to the function.
mcll <- function (pred, act)
{
if (class(act) != "factor") {
stop("act must be a factor")
}
pred[pred == 0] <- 1e-15
pred[pred == 1] <- 1 - 1e-15
dummies <- model.matrix(~act - 1)
if (nrow(dummies) != length(pred)) { # the main change is here
return(0)
}
return(-1 * (sum(dummies * log(pred)))/length(act))
}
From there we can use the summarise_each function.
df %>% group_by(Version,Type) %>% summarise_each(funs(mcll(., TrueClass)), matches("pred"))
Version Type pred.1 pred.2 pred.3 pred.4
(int) (chr) (dbl) (dbl) (dbl) (dbl)
1 1 a 1.475232 1.972779 1.743491 1.161984
2 1 b 2.030829 1.331629 1.397577 1.484865
3 2 a 1.589256 1.740858 1.898906 2.005511
I checked this against a subset of the data and it looks like it works.
mcll(df$pred.1[which(df$Type=="a" & df$Version==1)],
df$TrueClass[which(df$Type=="a" & df$Version==1)])
[1] 1.475232 #pred.1 mcll when Version equals 1 and Type equals a.
I have a little problem in a function.
The aim of it is to remove outliers I've detected in my data.frame. They are detected when there is too big a difference from the previous correct value (e.g. in c(1,2,3,20,30,4,5,6), "20" and "30" are the outliers). But my data is much more complex than this.
My idea is to consider the first two numeric values of my column as "correct". Then, I want to test each next value:
if the difference between the tested value and the previous one is <20, then it's a new correct one, and the test must start again from this new correct value (and not from the previous correct one)
if the same difference is >20, then it's a wrong one. An index must be put next to the wrong value, and the test must still continue from this same correct value, until a new correct value is detected
Here's an example with my function and a fake DF:
myts <- data.frame(x=c(12,12,35,39,46,45,33,5,26,28,29,34,15,15),z=NA)
test <- function(x){
st1 = NULL
temp <- st1[1] <- x[1]
st1 <- numeric(length(x))
for (i in 2:(length(x))){
if((!is.na(x[i])) & (!is.na(x[i-1]))& (abs((x[i])-(temp)) > 20)){
st1[i] <- 1
} }
return(st1)
}
myts[,2] <- apply(as.data.frame(myts[,1]),2,test)
myts[,2] <- as.numeric(myts[,2])
It nearly does the job, but the problem is that the last correct value is not memorized; the test is still performed against the first correct value.
Because of this, rows 9 to 11 in my example are not detected. You can imagine the problem on a 500,000-row data.frame.
How can I solve this little problem? The rest of the function may be OK.
You just need to update temp for any indices that aren't outliers:
test <- function(x) {
temp <- x[1]
st1 <- numeric(length(x))
for (i in 2:(length(x))){
if(!is.na(x[i]) & !is.na(x[i-1]) & abs(x[i]-temp) > 20) {
st1[i] <- 1
} else {
temp <- x[i]
}
}
return(st1)
}
myts[,2] <- apply(as.data.frame(myts[,1]),2,test)
myts[,2] <- as.numeric(myts[,2])
myts
# x z
# 1 12 0
# 2 12 0
# 3 35 1
# 4 39 1
# 5 46 1
# 6 45 1
# 7 33 1
# 8 5 0
# 9 26 1
# 10 28 1
# 11 29 1
# 12 34 1
# 13 15 0
# 14 15 0
One thing to note is that for loops in R will be quite slow compared to vectorized functions. However, because each element in your vector depends in a complicated way on the previous ones, it's tough to use R's built-in vectorized functions to compute your vector efficiently. You can convert this code nearly verbatim to C++ and use the Rcpp package to regain the efficiency:
library(Rcpp)
test2 <- cppFunction(
"IntegerVector test2(NumericVector x) {
const int n = x.length();
IntegerVector st1(n, 0);
double temp = x[0];
for (int i=1; i < n; ++i) {
if (!R_IsNA(x[i]) && !R_IsNA(x[i-1]) && fabs(x[i] - temp) > 20.0) {
st1[i] = 1;
} else {
temp = x[i];
}
}
return st1;
}")
all.equal(test(myts[,1]), test2(myts[,1]))
# [1] TRUE
# Benchmark on large vector with some NA values:
set.seed(144)
large.vec <- c(0, sample(c(1:50, NA), 1000000, replace=T))
all.equal(test(large.vec), test2(large.vec))
# [1] TRUE
library(microbenchmark)
microbenchmark(test(large.vec), test2(large.vec))
# Unit: milliseconds
# expr min lq mean median uq max neval
# test(large.vec) 2343.684164 2468.873079 2667.67970 2604.22954 2747.23919 3753.54901 100
# test2(large.vec) 9.596752 9.864069 10.97127 10.23011 11.68708 16.67855 100
The Rcpp code is about 250x faster on a vector of length 1 million. Depending on your use case this speedup may or may not be important.
As this is my first time asking a question on SO, I apologize in advance for any improper formatting.
I am very new to R and am trying to create a function that will return the row value of a data frame column once a running total in another column has met or exceeded a given value (the row that the running sum begins in is also an argument).
For example, given the following data frame, a start parameter of 3, and a stop parameter of 17, the function should return 5 (the X value of the row at which the running sum of Y first meets or exceeds 17).
X Y
1 5
2 10
3 5
4 10
5 5
6 10
7 5
8 10
The function as I've currently written it returns the correct answer, but I have to believe there is a much more 'R-ish' way to accomplish this, instead of using loops and incrementing temporary variables, and would like to learn the right way, rather than form bad habits that I will have to correct later.
A very simplified version of the function:
myFunction<-function(DataFrame,StartRow,Total){
df<-DataFrame[DataFrame[[1]] >= StartRow,]
i<-0
j<-0
while (j < Total) {
i<-i+1
j<-sum(df[[2]][1:i])
}
x<-df[[1]][i]
return(x)
}
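For reference, a quick run on the example table above (building the data frame inline for illustration):
df <- data.frame(X = 1:8, Y = c(5, 10, 5, 10, 5, 10, 5, 10))
myFunction(df, 3, 17)
# [1] 5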
All the solutions posted so far compute the cumulative sum of the entire Y variable, which can be inefficient in cases where the data frame is really large but the index is near the beginning. In this case, a solution with Rcpp could be more efficient:
library(Rcpp)
get_min_cum2 = cppFunction("
int gmc2(NumericVector X, NumericVector Y, int start, int total) {
double running = 0.0;
for (int idx=0; idx < Y.size(); ++idx) {
if (X[idx] >= start) {
running += Y[idx];
if (running >= total) {
return X[idx];
}
}
}
return -1; // Running total never exceeds limit
}")
Comparison with microbenchmark:
get_min_cum <-
function(start,total)
with(dat[dat$X>=start,],X[min(which(cumsum(Y)>total))])
get_min_dt <- function(start, total)
dt[X >= start, X[cumsum(Y) >= total][1]]
set.seed(144)
dat = data.frame(X=1:1000000, Y=abs(rnorm(1000000)))
dt = data.table(dat)
get_min_cum(3, 17)
# [1] 29
get_min_dt(3, 17)
# [1] 29
get_min_cum2(dat$X, dat$Y, 3, 17)
# [1] 29
library(microbenchmark)
microbenchmark(get_min_cum(3, 17), get_min_dt(3, 17),
get_min_cum2(dat$X, dat$Y, 3, 17))
# Unit: milliseconds
# expr min lq median uq max neval
# get_min_cum(3, 17) 125.324976 170.052885 180.72279 193.986953 418.9554 100
# get_min_dt(3, 17) 100.990098 149.593250 162.24523 176.661079 399.7531 100
# get_min_cum2(dat$X, dat$Y, 3, 17) 1.157059 1.646184 2.30323 4.628371 256.2487 100
In this case, it's about 100x faster to use the Rcpp solution than other approaches.
Try this, for example; I am using cumsum and vectorized logical subsetting:
get_min_cum <-
function(start,total)
with(dat[dat$X>=start,],X[min(which(cumsum(Y)>total))])
get_min_cum(3,17)
5
Here you go (using data.table because of ease of syntax):
library(data.table)
dt = data.table(df)
dt[X >= 3, X[cumsum(Y) >= 17][1]]
#[1] 5
Well, here's one way:
i <- 3
j <- 17
min(df[i:nrow(df),]$X[cumsum(df$Y[i:nrow(df)])>j])
# [1] 5
This takes df$X for rows i:nrow(df) and indexes that based on cumsum(df$Y) > j, starting also at row i. This returns all df$X for which the cumsum > j. min(...) then returns the smallest value.
with(df, which(cumsum((X >= 3) * Y) >= 17)[1])  # returns 5: the first row position at which the running total reaches 17 (here equal to the X value)