Inconsistent dist() foreach results - r

I have data that is roughly in the following format. The real dataset is very large and is broken up into groups by the class and uniqueId variables, where each row holds one location pair (x, y).
df <-
  data.frame(
    x = c(1, 2, 3, 4, 5, 6, 8, 9, 10),
    y = c(1, 2, 3, 4, 5, 6, 8, 9, 10),
    class = c(0, 0, 0, 0, 0, 1, 0, 1, 0),
    uniqueId = c("1-2-3", "1-2-3", "1-2-3", "1-2-4", "1-2-4", "1-2-4", "1-3-2", "1-3-2", "1-3-2"),
    partialId = c("1.2", "1.2", "1.2", "1.2", "1.2", "1.2", "1.3", "1.3", "1.3")
  )
The function I am using should go through the data frame and, for each row, calculate the smallest distance to another point with the same uniqueId but a different class. To do this I've broken my data into chunks in the following way.
library(tidyverse)
indexes <-
  df %>%
  select(partialId) %>%
  unique()
j <- 1
library(doParallel)
class_separation <- c()
cl <- makePSOCKcluster(24)
registerDoParallel(cl)
starttime <- Sys.time()
while(j <= nrow(indexes)) {
  test <- df %>% filter(partialId == indexes$partialId[j])
  n <- nrow(test)
  vec <- numeric(n)
  vec <- foreach(k = 1:n, .combine = 'c', .multicombine = F) %dopar% {
    c(
      min(
        apply(
          test[test$uniqueId == test$uniqueId[k] & test$class != test$class[k], c("x", "y")],
          1,
          function(x) dist(rbind(c(test$x[k], test$y[k]), c(x[1], x[2])))
        )
      )
    )
  }
  class_separation <- c(class_separation, vec)
  j <- j + 1
}
endtime <- Sys.time()
stopwatch <- endtime - starttime
closeAllConnections()
registerDoSEQ()
gc()
df <- cbind(df, class_separation)
When handling single plays or small batches, this code seems to operate just fine. However, when handling the full dataset I am getting results that are obviously incorrect. I know there must be a flaw in how I am calculating these distances since there is very little chance the dist() function itself or %dopar% is at fault. I have changed to %do% and my results do not change.
As an example of the discrepancy, the class_separation column from the full-data run is wildly different from the one produced when I feed in a small example, and I'm not sure why.

After a day of thinking about this, I found the problem: it was in how I was sending my df to dist().
For example, if we intended to pass
dist(rbind(c(1, 1), c(6, 6)))
dist(rbind(c(1, 1), c(9, 9)))
What we actually pass is dist(rbind(c(1, 1), c(6, 6, 9, 9)))
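A quick console check (my own sanity check, not part of the original code) shows the difference: rbind() recycles the shorter vector, so the combined call silently computes a single 4-dimensional distance instead of two 2-dimensional ones.
dist(rbind(c(1, 1), c(6, 6)))        # 7.071068
dist(rbind(c(1, 1), c(9, 9)))        # 11.31371
dist(rbind(c(1, 1), c(6, 6, 9, 9)))  # c(1, 1) is recycled to c(1, 1, 1, 1), giving sqrt(25 + 25 + 64 + 64) = 13.34166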
That is obviously not what I want. I needed both distances and then to select the minimum between them (or apply other conditionals). The way I found to do this was with the rdist package.
foreach(i = 1:nrow(df), .combine = 'c', .multicombine = F,
        .packages = c('tidyverse', 'rdist')) %dopar% {
  min(
    cdist(
      df[df$class != df$class[i] & df$uniqueId == df$uniqueId[i], ] %>% select(x, y),
      df %>% select(x, y) %>% slice(i)
    )
  )
}
For our test data this returns the vector
Inf Inf Inf 2.828427 1.414214 1.414214 1.414214 1.414214 1.414214
Which is exactly what I needed. The first three entries, having no class == 1 options for their uniqueId, return Inf; row 4 is twice as far from row 6 as row 5 is, with all three sharing the same uniqueId; and row 9 is equally distant from rows 8 and 10. Whether this solution will be sufficiently fast I will test out.


How to create list of functions with multiple parameters from dataframes in R?

Long-time reader, first-time poster. I have not found any previous questions about my current problem. I would like to create multiple linear functions, which I can later apply to variables. I have a data frame of slopes, df_slope, and a data frame of constants, df_constant.
Dummy data:
df_slope <- data.frame(var1 = c(1, 2, 3, 4, 5), var2 = c(2, 3, 4, 5, 6), var3 = c(-1, 1, 0, -10, 1))
df_constant <- data.frame(var1 = c(3, 4, 6, 7, 9), var2 = c(2, 3, 4, 5, 6), var3 = c(-1, 7, 8, 0, -1))
I would like to construct functions such as
myfunc <- function(slope, constant, trvalue){
  result <- trvalue*slope + constant
  return(result)
}
where the slope and constant values are
slope<- df_slope[i,j]
constant<- df_constant[i,j]
I have tried many ways, for example like this, creating a data frame of functions with a for loop:
myfunc_all <- data.frame()
for(i in 1:5){
  for(j in 1:3){
    myfunc_all[i,j] <- function (x){ x*df_slope[i,j]+df_constant[i,j] }
    full_func[[i]][j] <- func_full
  }
}
without success. The slope-constant values are paired up, such that df_slope[i,j] is paired with df_constant[i,j]. The desired end result would be some kind of data frame from which I can call a function by giving it the coordinates, for example like this:
myfunc_all[i,j]
but any form would be great. For example
myfunc_all[2,1]
in our case would be
function (x){ x*2+4 }
which I can apply to different x values. I hope my problem is clear.
So you have a slight problem with lazy evaluation and variable scoping when you use a for loop to build functions. It's a bit safer to use something like mapply, which will create the closures for you. Try
myfunc_all <- with(expand.grid(1:5, 1:3), mapply(function(i, j) {
  function(x) {
    x*df_slope[i,j]+df_constant[i,j]
  }
}, Var1, Var2))
dim(myfunc_all) <- c(5,3)
This will create an array-like object. The only difference is that you need to use double brackets to extract the function. For example
myfunc_all[[2,1]](0)
# [1] 4
myfunc_all[[5,3]](0)
# [1] -1
Alternatively, you can choose to write a function that returns a function. That would look like
myfunc_all <- (function(slopes, constants) {
  function(i, j)
    function(x) x*slopes[i,j]+constants[i,j]
})(df_slope, df_constant)
Then, rather than using brackets, you call the function with parentheses.
myfunc_all(2,1)(0)
# [1] 4
myfunc_all(5,3)(0)
# [1] -1
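To see why the original for-loop version misbehaves, here is a small illustration of my own (not from either answer): every closure built in the loop captures the same variable i, which has already reached its final value by the time any of the functions is called.
fs <- list()
for (i in 1:3) {
  fs[[i]] <- function(x) x*df_slope[i, 1] + df_constant[i, 1]
}
# every closure now sees i == 3, so they all use the third slope/constant pair
fs[[1]](0)
# [1] 6
fs[[3]](0)
# [1] 6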
df_slope <- data.frame(var1 = c(1, 2, 3,4,5), var2 = c(2,3,4,5,6), var3 = c(-1, 1, 0, -10, 1))
df_constant<- data.frame(var1 = c(3, 4, 6,7,9), var2 = c(2,3,4,5,6), var3 = c(-1, 7, 8, 0, -1))
functions = vector(mode = "list", length = nrow(df_slope))
for (i in 1:nrow(df_slope)) {
  functions[[i]] = function(i, x) { df_slope[i]*x + df_constant[i] }
}
f = function(i, x) {
  functions[[i]](i, x)
}
f(1, 1:10)
f(3, 5:10)

Problem with row-wise operation in base R

I have a problem with performing row-wise operations using the apply function in base R. I want to calculate the distance between two points:
d <- function(x, y){
  length <- norm(x - y, type = "2")
  as.numeric(length)
}
The coordinates are given by two dataframes:
start <- data.frame(
  a = c(7, 5, 17, 1),
  b = c(5, 17, 1, 2))
stop <- data.frame(
  b = c(5, 17, 1, 2),
  c = c(17, 1, 2, 1))
My point is to calculate the successive distances given by the start and stop coordinates. I wish it worked like:
d(start[1,], stop[1,])
d(start[2,], stop[2,])
d(start[3,], stop[3,])
etc...
I have tried:
apply(X = start, MARGIN = 1, FUN = d, y = stop)
which brought some strange results. Can you please help me find the proper solution? I know how to perform the operation using the dplyr rowwise() function; however, my wish is to use base R only.
Can you also explain to me why I received such strange results with apply()?
Loop over the sequence of rows and apply d:
sapply(seq_len(nrow(start)), function(i) d(start[i,], stop[i,]))
[1] 12.165525 20.000000 16.031220 1.414214
Or, if we want to use apply, create a single dataset by cbinding the two data frames and then subset by indexing:
apply(cbind(start, stop), 1, FUN = function(x) d(x[1:2], x[3:4]))
[1] 12.165525 20.000000 16.031220 1.414214
Or we may use dapply for efficiency:
library(collapse)
dapply(cbind(start, stop), MARGIN = 1, parallel = TRUE,
FUN = function(x) d(x[1:2], x[3:4]))
[1] 12.165525 20.000000 16.031220 1.414214
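As to why the original apply() call gave strange results (my own reading, not part of the answers above): apply() passes each row of start to d as a plain numeric vector, while y remains the entire stop data frame, so x - y is a 4 x 2 difference table (the two coordinates get recycled over it) and norm(..., type = "2") then returns a single matrix norm of that table rather than a row-wise distance.
# roughly what each iteration of apply(start, 1, d, y = stop) computes
d(c(7, 5), stop)          # one number for the whole 4 x 2 difference matrix
d(start[1, ], stop[1, ])  # the intended row-wise distance, 12.165525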

fast way to create a transition frequency table

I have a table showing sequences of patterns, represented by the id of the sequence and the ordered values of the consecutive patterns over several rows (patterns all have the same length, but sequences have different lengths).
patterns_seq_a <- c("ABC", "BCD", "ABC", "CBA")
patterns_seq_b <- c("BCD", "BCC", "BAC", "ABC", "BCD")
patterns_seq_c <- c("ABC", "ABC")
sequence_df <- data.frame(sequence_id = c(rep("a", length(patterns_seq_a)),
                                          rep("b", length(patterns_seq_b)),
                                          rep("c", length(patterns_seq_c))),
                          pattern = c(patterns_seq_a, patterns_seq_b, patterns_seq_c))
and I'm trying to count the frequency from one pattern to another over all the sequences.
(and ultimately will build a probability of transitions from the frequency table)
So, for example, looking at the transition from "CBA" to "BCD", its frequency over all sequences would be 0 (NB: rows 4 and 5 don't belong to the same sequence).
The end result would have the form of:
unique_patterns <- unique(sequence_df$pattern)
result <- data.frame(matrix(0, ncol = length(unique_patterns), nrow = length(unique_patterns)))
colnames(result) <- unique_patterns
rownames(result) <- unique_patterns
(but could also be a hash pattern -> pattern -> frequency if it has to)
On the web I've found a solution based on concatenating the sequences and then grepping, but it's unusable as it's too slow (profvis blames the greps). Something along the lines of this:
freq_table <- c()
for (start_pattern in unique_patterns) {
  for (end_pattern in unique_patterns) {
    transition_pattern <- paste0(start_pattern, ',', end_pattern)
    sequence_holding_transition <- concat_sequence_df[grep(transition_pattern, concat_sequence_df$patterns_sequence),]
    if (nrow(sequence_holding_transition) < 1) {
      transition_frequency <- c(transition_pattern, 0)
    } else {
      concat_sequence_holding_transition <- paste0(sequence_holding_transition$patterns_sequence, collapse = ",", sep="/")
      transition_pattern_positions <- gregexpr(pattern = transition_pattern, text = concat_sequence_holding_transition)[[1]]
      transition_frequency <- c(transition_pattern, length(transition_pattern_positions))
    }
    freq_table <- rbind(freq_table, transition_frequency)
  }
}
frequency_table <- data.frame(pattern_transition = freq_table[, 1], counts = freq_table[, 2])
frequency_table$pattern.from <- sapply(strsplit(as.character(frequency_table$pattern_transition), ","), `[`, 1)
frequency_table$pattern.to <- sapply(strsplit(as.character(frequency_table$pattern_transition), ","), `[`, 2)
frequency_table <- t(matrix(as.numeric(as.character(frequency_table$counts)), ncol=length(unique_patterns), nrow=length(unique_patterns)))
colnames(frequency_table) <- unique_patterns
rownames(frequency_table) <- unique_patterns
I haven't been able to vectorize it and those greps are really slow in any case. I tried parallelizing it but it's still too slow.
Would someone have a lightning fast solution of building this frequency table?
I don't really care about the code sample; it's just here to show an example of an (unusable) solution.
Thanks!
UPDATE:
Please find below the expected output, assuming I have counted correctly.
Having rows with no transitions (such as "CBA" to "XXX") dropped is acceptable, and of course the actual type of the structure doesn't matter:
structure(
  list(
    ABC = c(1, 1, 0, 1, 0),
    BAC = c(0, 0, 1, 0, 0),
    BCC = c(0, 0, 0, 1, 0),
    BCD = c(2, 0, 0, 0, 0),
    CBA = c(1, 0, 0, 0, 0)
  ),
  row.names = c("ABC", "BAC", "BCC", "BCD", "CBA"),
  class = "data.frame"
)
I have not benchmarked it, but it seems that the following solution uses basic R functions that tend to be fast. Given sequence_df from the question:
table(unlist(tapply(sequence_df$pattern, sequence_df$sequence_id,
                    FUN = function(p) paste0(p[-length(p)], p[-1]))))
I am using tapply to examine each sequence_id on its own and paste0 for the transition patterns. unlist puts it all into one large vector, which table can count. I am not sure whether one would call that fully vectorized, but at least it has no nested for loops and no regular expressions.
It certainly lacks code to transform the table into a matrix right now. That can be written once we know how it compares runtime-wise with the other solutions, and depending on whether a matrix is really the ideal format for the operations that come after.
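For what it's worth, here is a minimal sketch of that conversion (my own addition, assuming, as in the example data, that every pattern is exactly 3 characters long):
tr <- table(unlist(tapply(sequence_df$pattern, sequence_df$sequence_id,
                          FUN = function(p) paste0(p[-length(p)], p[-1]))))
pats <- sort(unique(sequence_df$pattern))
freq <- matrix(0, nrow = length(pats), ncol = length(pats),
               dimnames = list(pats, pats))
# each name of tr is a 6-character "from" + "to" pair, e.g. "ABCBCD"
freq[cbind(substr(names(tr), 1, 3), substr(names(tr), 4, 6))] <- as.numeric(tr)
freq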
OK, so I had a look at the solutions from @Berhard and @user20650, and while I haven't seriously validated the results yet, it looks like they both do the job.
One difference is that the tapply solution doesn't generate the 0-frequency transitions while the data.table one does.
Both solutions are faster than the nested for / rbind example, as expected.
So I have tried to benchmark both, and assuming no mistake on my side adapting your code, the data.table solution is more than twice as fast as the tapply one.
Thank you both for these elegant answers, appreciated!
library(data.table)
library(tidyr)
library(microbenchmark)
patterns_sequences <- lapply(seq(1:8), function(id) {
  length_sequence <- sample(3:10, 1);
  do.call(paste0, replicate(3, sample(c("A", "B", "C"), length_sequence, TRUE), FALSE))
})
sequence_df <- data.frame(sequence_id = c(rep("a", length(patterns_sequences[[1]])),
                                          rep("b", length(patterns_sequences[[2]])),
                                          rep("c", length(patterns_sequences[[3]])),
                                          rep("d", length(patterns_sequences[[4]])),
                                          rep("e", length(patterns_sequences[[5]])),
                                          rep("f", length(patterns_sequences[[6]])),
                                          rep("g", length(patterns_sequences[[7]])),
                                          rep("h", length(patterns_sequences[[8]]))),
                          pattern = unlist(patterns_sequences))
build_frequency_table_base <- function(sequence_df) {
  ft0 <- as.data.frame(table(unlist(tapply(sequence_df$pattern, sequence_df$sequence_id,
                                           FUN = function(p) paste0(p[-length(p)], ",", p[-1])))),
                       stringsAsFactors = FALSE)
  ft1 <- ft0 %>%
    tidyr::separate(Var1, c("from_pattern", "to_pattern"), ",")
  ft5 <- tidyr::spread(ft1, to_pattern, Freq, fill = 0)
  rownames(ft5) <- ft5$from_pattern
  ft5$from_pattern <- NULL
  ft5
}
build_frequency_table_dt <- function(sequence_df) {
  dt = as.data.table(sequence_df);
  dt[, pattern := factor(pattern)];
  dt[, pl := shift(pattern), by = sequence_id][, pl := factor(pl, level = levels(pattern))];
  res_dt <- with(dt, table(pl, pattern))
  res_dt <- as.data.frame.matrix(res_dt)
}
tictoc::tic("base")
res_base <- build_frequency_table_base(sequence_df)
tictoc::toc()
tictoc::tic("DT")
res_dt <- build_frequency_table_dt(sequence_df)
tictoc::toc()
(bench = microbenchmark::microbenchmark(
build_frequency_table_base(sequence_df),
res_dt <- build_frequency_table_dt(sequence_df),
times=1000L
))
ggplot2::autoplot(bench)

How to assign values quickly in r

I want to run a partition regression in R, for which I need to assign a factor indicating which partition each data point belongs to. For example, when a value is greater than mean + 2 standard deviations I assign the indicator 2; between mean + 1 sd and mean + 2 sd, I assign 1; and so on. I know it can be done with if and else, but when there are many partitions the code becomes very long. Is there an easy and succinct method to accomplish this?
mean <- mean(x)
sd <- sd(x)
signal <- vector()
for(i in 1:length(x)){
  if((x[i]-mean)/sd < -3) signal[i] <- -3
  if((x[i]-mean)/sd > -3) signal[i] <- -2
  if((x[i]-mean)/sd > -2) signal[i] <- -1
  if((x[i]-mean)/sd > -1) signal[i] <- 0
  if((x[i]-mean)/sd > 1) signal[i] <- 1
  if((x[i]-mean)/sd > 2) signal[i] <- 2
  if((x[i]-mean)/sd > 3) signal[i] <- 3
}
Thanks to @jogo and @r.user.05apr.
Now I have a slightly different problem. I want to compute the partition based on rolling windows, 20 days for example, which means I need to scale the data of day t based on the past 20 days (days t-20 to t-1) and assign the same values as above according to its z-score. In such a case, can the cut function still be used? I have written code with a loop and if statements:
signal <- vector()
n <- 20 # window
for(i in (n+1):length(x)){
  mean <- mean(x[(i-20):(i-1)])
  sd <- sd(x[(i-20):(i-1)])
  if((x[i]-mean)/sd < -3) signal[i] <- -3
  if((x[i]-mean)/sd > -3) signal[i] <- -2
  if((x[i]-mean)/sd > -2) signal[i] <- -1
  if((x[i]-mean)/sd > -1) signal[i] <- 0
  if((x[i]-mean)/sd > 1) signal[i] <- 1
  if((x[i]-mean)/sd > 2) signal[i] <- 2
  if((x[i]-mean)/sd > 3) signal[i] <- 3
}
You can use cut()
x <- iris$Petal.Length
m <- mean(x)
s <- sd(x)
cut((x - m)/s, breaks = c(-Inf, -3, -2, -1, 1, 2, 3, +Inf), labels = c((-3):3))
to coerce to numeric:
as.numeric(as.character(cut((x - m)/s, breaks = c(-Inf, -3, -2, -1, 1, 2, 3, +Inf), labels = c((-3):3))))
remark:
You can shorten (x - m)/s to scale(x)
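For the rolling-window follow-up, cut() can still be used inside the loop; here is a minimal sketch of my own (same breaks and labels as above, 20-day window):
signal <- rep(NA_real_, length(x))
n <- 20
for (i in (n + 1):length(x)) {
  z <- (x[i] - mean(x[(i - n):(i - 1)])) / sd(x[(i - n):(i - 1)])
  signal[i] <- as.numeric(as.character(
    cut(z, breaks = c(-Inf, -3, -2, -1, 1, 2, 3, +Inf), labels = c((-3):3))
  ))
}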
Depends on how dynamic the value assignment has to be. Alternative option:
criteria <- data.frame(operator = c("<", rep(">", 6)),
                       criterion = c(-3, seq(-3, -1, 1), 1:3),
                       result = c(seq(-3, 0, 1), 1:3),
                       stringsAsFactors = FALSE)
criteria # data frame with individual conditions for if
get_signal <- function(mean, sd, x) {
  dummy <- (x-mean)/sd
  res <- NA  # fallback for values that satisfy none of the conditions
  for (i in (1:nrow(criteria))) {
    if (do.call(criteria[i, 1], list(dummy, criteria[i, 2]))) res <- criteria[i, 3]
  }
  res
}
sapply(-5:10, function(x) get_signal(2, 1, x))

For Loop in R to Transform Each Column Using Box Cox from the Cars Package

I am working on an assignment for school. I need to transform the columns in a data frame using a for loop and the bcPower function from the car package. My data frame, named bb2.df, consists of 13 columns of baseball statistics for 337 players. The data is from:
http://ww2.amstat.org/publications/jse/datasets/baseball.dat.txt
I read the data in using:
bb.df <- read.fwf("baseball.dat.txt",widths=c(4,6,6,4,4,3,3,3,4,4,4,3,3,2,2,2,2,19))
And then I created a second data frame just for the numeric stats using:
bb2.df <- bb.df[,1:13]
library(car)
Then I unsuccessfully tried to build the for loop.
> bb2.df[[i]] <- bcPower(bb2.df[[i]],c)
> for (i in 1:ncol(bb2.df)) {
+ c <- coef(powerTransform(bb2.df[[i]]))
+ bb2.df[[i]] <- bcPower(bb2.df[[i]],c)
+ }
Error in bc1(out[, j], lambda[j]) :
First argument must be strictly positive.
The loop seems to transform the first three columns but stops.
What am I doing wrong?
This solution
tests whether a column appears to contain logical values and omits them from the transformation
replaces zero values in the vectors with a small number, outside the range of the actual values
stores the transformed values in a new data frame, retaining the column and row names
I have also tested all of the variables for normality before and after the transformation. I tried to find a variable that is interesting in that the transformed variable has a large p-value for the Shapiro test, but also where there was a large change in the p-value. Finally, the interesting variable is scaled in both the original and transformed versions, and the two versions are overlaid on a density plot.
library(car); library(ggplot2); library(reshape2)
# see this link for column names and type hints
# http://ww2.amstat.org/publications/jse/datasets/baseball.txt
# add placeholder column for opening quotation mark
bb.df <-
  read.fwf(
    "http://ww2.amstat.org/publications/jse/datasets/baseball.dat.txt",
    widths = c(4, 6, 6, 4, 4, 3, 3, 3, 4, 4, 4, 3, 3, 2, 2, 2, 2, 2, 17)
  )
# remove placeholder column
bb.df <- bb.df[,-(ncol(bb.df) - 1)]
names(bb.df) <- make.names(
  c(
    'Salary', 'Batting average', 'OBP', 'runs', 'hits', 'doubles', 'triples',
    'home runs', 'RBI', 'walks', 'strike-outs', 'stolen bases', 'errors',
    "free agency eligibility", "free agent in 1991/2",
    "arbitration eligibility", "arbitration in 1991/2", 'name'
  )
)
# test for boolean/logical values... don't try to transform them
logicals.test <- apply(
  bb.df,
  MARGIN = 2,
  FUN = function(one.col) {
    asnumeric <- as.numeric(one.col)
    aslogical <- as.logical(asnumeric)
    renumeric <- as.numeric(aslogical)
    matchflags <- renumeric == asnumeric
    cant.be.logical <- any(!matchflags)
    print(cant.be.logical)
  }
)
logicals.test[is.na(logicals.test)] <- FALSE
probably.numeric <- bb.df[, logicals.test]
result <- apply(probably.numeric, MARGIN = 2, function(one.col)
{
  # can't transform vectors containing non-positive values
  # replace zeros with something small
  non.zero <- one.col[one.col > 0]
  small <- min(non.zero) / max(non.zero)
  zeroless <- one.col
  zeroless[zeroless == 0] <- small
  c <- coef(powerTransform(zeroless))
  transformation <- bcPower(zeroless, c)
  return(transformation)
})
result <- as.data.frame(result)
row.names(result) <- bb.df$name
cols2test <- names(result)
normal.before <- sapply(cols2test, function(one.col) {
  print(one.col)
  temp <- shapiro.test(bb.df[, one.col])
  return(temp$p.value)
})
normal.after <- sapply(cols2test, function(one.col) {
  print(one.col)
  temp <- shapiro.test(result[, one.col])
  return(temp$p.value)
})
more.normal <- cbind.data.frame(normal.before, normal.after)
more.normal$more.normal <-
more.normal$normal.after / more.normal$normal.before
more.normal$interest <-
more.normal$normal.after * more.normal$more.normal
interesting <-
rownames(more.normal)[which.max(more.normal$interest)]
data2plot <-
cbind.data.frame(bb.df[, interesting], result[, interesting])
names(data2plot) <- c("original", "transformed")
data2plot <- scale(data2plot)
data2plot <- melt(data2plot)
names(data2plot) <- c("Var1", "dataset", interesting)
ggplot(data2plot, aes(x = data2plot[, 3], fill = dataset)) +
geom_density(alpha = 0.25) + xlab(interesting)
Original, incomplete answer:
I believe you're trying to do illegal power transformations (vectors including non-positive values, specifically zeros, and vectors with no variance).
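A quick diagnostic of my own (not part of the original answer) can confirm which columns contain zeros or negative values and would therefore trip that error:
# TRUE for columns with any non-positive value, which powerTransform/bcPower will reject
sapply(bb2.df, function(one.col) any(one.col <= 0, na.rm = TRUE))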
The fact that you are copying bb.df into bb2.df and then overwriting is a sure sign that you should really be using apply.
This doesn't create a useful data frame, but it should get you started:
library(car)
bb.df <-
  read.fwf(
    "baseball.dat.txt",
    widths = c(4, 6, 6, 4, 4, 3, 3, 3, 4, 4, 4, 3, 3, 2, 2, 2, 2, 19)
  )
bb.df[bb.df == 0] <- NA
# skip last (text) col
for (i in 1:(ncol(bb.df) - 1)) {
  print(i)
  # use comma to indicate indexing by column
  temp <- bb.df[, i]
  temp[temp == 0] <- NA
  temp <- temp[complete.cases(temp)]
  if (length(unique(temp)) > 1) {
    c <- coef(powerTransform(bb.df[, i]))
    print(bcPower(bb.df[i], c))
  } else {
    print(paste0("column ", i, " is invariant"))
  }
}
# apply solution
result <- apply(bb.df[, -ncol(bb.df)], MARGIN = 2, function(one.col)
{
  temp <- one.col
  temp[temp == 0] <- NA
  temp <- temp[complete.cases(temp)]
  if (length(unique(temp)) > 1) {
    c <- coef(powerTransform(temp))
    transformation <- bcPower(temp, c)
    return(transformation)
  } else
  {
    print("skipping invariant column")
    return(NULL)
  }
})
