I am a bit stuck on this basic problem and cannot find a solution.
I have two data frames (dummies below):
x<- data.frame("Col1"=c(1,2,3,4), "Col2"=c(3,3,6,3))
y<- data.frame("ColA"=c(0,0,9,4), "ColB"=c(5,3,20,3))
I need to use the location (row number) of the median value of one column in data frame x to retrieve a value from data frame y. For this, I am trying to get the row number of the median value in e.g. x$Col1 and then retrieve the corresponding value with something like y[,"ColB"][row.number]
Is there an elegant way/function for doing this? Solutions might need to account for two cases: when the sample has an even number of values and when it has an odd number (with an even number of values, the median might not appear in the sample at all, since it is the mean of the two middle values).
The problem is a little underspecified.
What should happen when the median isn't in the data?
What should happen if the median appears in the data multiple times?
Here's a solution which takes the (absolute) difference between each value and the median, then returns the index of the first row for which that difference vector achieves its minimum.
with(x, which.min(abs(Col1 - median(Col1))))
# [1] 2
The quantile function with type = 1 (i.e. no averaging) may also be of interest, depending on your desired behavior. It returns the lower of the two "sides" of the median, while the which.min method above can depend on the ordering of your data.
quantile(x$Col1, .5, type = 1)
# 50%
# 2
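For example, here is a small illustration of that ordering dependence (the vector below is just the example column reversed):
v <- c(4, 3, 2, 1)
v[which.min(abs(v - median(v)))]  # picks the upper side here
# [1] 3
quantile(v, .5, type = 1)         # always the lower side
# 50%
#   2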
An option using quantile is
with(x, which(Col1 == quantile(Col1, .5, type = 1)))
# [1] 2
This could possibly return multiple row-numbers.
Edit:
If you want it to only return the first match, you could modify it as shown below
with(x, which.min(Col1 != quantile(Col1, .5, type = 1)))
Here, something like y$ColB[which(x$Col1 == round(median(x$Col1)))] would do the trick.
The problem is that x has an even number of rows, so the median 2.5 does not appear in the data. In this case you have to choose between 2 and 3.
Note: The above works for your example, but not for general cases (e.g. c(-2L,2L), or with non-integer values). For the more general case see @IceCreamToucan's solution.
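For instance, a minimal sketch of that more general approach combined with the lookup into y (using the dummy frames from the question):
idx <- with(x, which.min(abs(Col1 - median(Col1))))
idx
# [1] 2
y$ColB[idx]
# [1] 3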
I currently am using the which.max() function in R within a loop. Sometimes, I have a vector which contains the same elements, like:
vec <- c(0,0,2,0,2)
The function will then always return:
> which.max(vec)
[1] 3
I am wondering if there is a simple solution to break ties randomly so that it doesn't always choose the smallest index among ties. I know that there is a which.is.max function in nnet, but was hoping to see if there was another simple solution without having to resort to installing extra packages. Thanks.
which(vec == max(vec)) will match all ties. You can then pick one at random using sample(which(vec == max(vec)), 1).
As you mentioned in the comments, sample does something annoying when the supplied vector has length 1, i.e. when there is only one maximum: sample(maxima, 1) then samples from 1:maxima instead of just returning maxima.
You can fix this as follows:
maxima <- which(vec == max(vec))
if (length(maxima) > 1) {
  maxima <- sample(maxima, 1)
}
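If this comes up repeatedly, the two steps could be wrapped into a small helper (the function name here is made up):
which_max_random <- function(x) {
  maxima <- which(x == max(x))
  if (length(maxima) > 1) {
    maxima <- sample(maxima, 1)
  }
  maxima
}
which_max_random(c(0, 0, 2, 0, 2)) # returns 3 or 5, each with probability 1/2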
Another method is to use rank with ties.method = "random" and then apply which.max to the result.
which.max(rank(vec, ties.method = "random"))
which.max(rank(vec, ties.method = "random"))
#[1] 3
which.max(rank(vec, ties.method = "random"))
#[1] 5
rank basically ranks the values of the vector, and with ties.method = "random" it assigns the ranks randomly in case of a tie.
rank(vec, ties.method = "random")
#[1] 2 1 4 3 5
rank(vec, ties.method = "random")
#[1] 1 3 5 2 4
There is a concept called "perturbation", where you modify each number by a random amount that is significantly smaller than the existing variation. You can then take the maximum, which will be one of the original maxima plus some random amount; which of the original maxima gets selected is random, since it is determined by which one had the largest random amount added. For instance, if all your numbers are integers, you can convert them to floats, add a random number between 0 and .001, pick the largest one, and then round back to int. This is probably not the most efficient method, but given that you mentioned which.is.max in nnet, presumably you are working with neural networks, and perturbation is an important concept with NNs.
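A rough sketch of that perturbation idea in R (assuming a numeric vector with at least two distinct values, so the noise scale below is well defined):
vec <- c(0, 0, 2, 0, 2)
# Noise much smaller than the smallest gap between distinct values
eps <- min(diff(sort(unique(vec)))) / 1000
which.max(vec + runif(length(vec), 0, eps)) # 3 or 5, chosen at random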
As an alternative (this uses the magrittr pipe):
library(magrittr)
vec <- c(0,0,2,0,2)
vec %>% unique %>% sapply(function(x) which(x==vec)[sample(x=length(which(x==vec)),1)])
Can anyone give me a hint to speed up the following program?
Situation: I have a huge amount of measurement data. I need to extract data for "10-minute stable operation conditions" of 5 parameters, i.e. column values.
Here is my (working, but really slow) solution:
- Take the first 10 rows from the dataframe
- Compare the min and max of each column to the first value of the column
- If at least one column min or max is not within tolerance, delete the first row, repeat
- If they are within tolerance, calculate the mean of the results, store them, delete 10 rows, repeat.
- break when the dataframe has less than 10 rows
Since I am using a repeat loop, this takes 30 minutes to extract 610 operation points from 86,220 minutes of data.
Any help is appreciated. Thanks!
Edit: I created some code to explain. Please note that I deleted the checking routines for NA values and standby operation (values around 0):
n_cons<-5 # Number of consistent minutes?
### Function to check whether a value is within tolerance
f_cons<-function(min,max,value,tol){
  z <- max > (value + tol) | min < (value - tol)
  return(z)
}
# Define the +/- tolerances
Vu_1_tol<-5 # F_HT
Vu_2_tol<-5 # F_LT
# Create empty result map
map<-c(rep(NA,3))
dim(map)<- c(1,3)
colnames(map)<-list("F_HT","F_LT","Result")
system.time(
repeat{
  # Criteria to break
  if(nrow(t6)<n_cons){break}
  # Subset of the data to check
  t_check<-NULL
  t_check<-cbind(t6$F_HT[1:n_cons],
                 t6$F_LT[1:n_cons])
  # Check for consistency
  if(f_cons(min(t_check[,1]),max(t_check[,1]),t_check[1,1],Vu_1_tol)){
    t6<-t6[-1,]
    next
  }
  if(f_cons(min(t_check[,2]),max(t_check[,2]),t_check[1,2],Vu_2_tol)){
    t6<-t6[-1,]
    next
  }
  # If the repeat loop passes the consistency check, store the means
  attach(t6[1:n_cons,])
  # create a new row with means of the steady block
  new_row<-c(mean(F_HT),mean(F_LT),mean(Result))
  new_row[-1]<-round(as.numeric(new_row[-1]),2)
  map<-rbind(map,new_row) # attach new steady point to the map
  detach(t6[1:n_cons,])
  t6<-t6[-(1:n_cons),] # delete the evaluated rows from the data
}
)
The data I am using looks like this
t6<-structure(list(F_HT = c(1499.71, 1500.68, 1500.44, 1500.19, 1500.31,
1501.76, 1501, 1551.22, 1500.01, 1500.52, 1499.53, 1500.78, 1500.65,
1500.96, 1500.25, 1500.76, 1499.49, 1500.24, 1500.47, 1500.25,
1735.32, 2170.53, 2236.08, 2247.48, 2250.71, 2249.59, 2246.68,
2246.69, 2248.27, 2247.79), F_LT = c(2498.96, 2499.93, 2499.73,
2494.57, 2496.94, 2507.71, 2495.67, 2497.88, 2499.63, 2506.18,
2495.57, 2504.28, 2497.38, 2498.66, 2502.17, 2497.78, 2498.38,
2501.06, 2497.75, 2501.32, 2500.79, 2498.17, 2494.82, 2499.96,
2498.5, 2503.47, 2500.57, 2501.27, 2501.17, 2502.33), Result = c(9125.5,
8891.5, 8624, 8987, 9057.5, 8840.5, 9182, 8755.5, 9222.5, 9079,
9175.5, 9458.5, 9058, 9043, 9045, 9309, 9085.5, 9230, 9346, 9234,
9636.5, 9217.5, 9732.5, 9452, 9358, 9071.5, 9063.5, 9016.5, 8591,
8447.5)), .Names = c("F_HT", "F_LT", "Result"), row.names = 85777:85806, class = "data.frame")
With this code and data, I get 3 steady operation points, which is what I want, but which is very slow.
Hopefully, this helps to better explain my problem.
Eureka!
Thanks to Carl Witthoft's comment, I was able to speed up the process by a factor of 15!
I used rollapply a lot, because rollmean and rollmax had some problems with NA values which did not occur when using rollapply.
Thanks for your help!
Here is what I did, using the same data as before:
library(zoo) # rollapply, rollmean etc. come from the zoo package
# Use only the values needed to check for stability
t7<-as.data.frame(cbind(t6$F_HT,t6$F_LT))
n_cons<-5 # Number of consistent minutes?
# Calculate the mean values for each column over 5 rows
t7_rm<-rollapply(t7,n_cons,mean,align = "left")
colnames(t7_rm)<-c("mean_F_HT","mean_F_LT")
# idem with maximum
t7_max<-rollapply(t7,width=n_cons,FUN=max, na.rm = F,align = "left")
colnames(t7_max)<-c("max_F_HT","max_F_LT")
# idem with minimum
t7_min<-rollapply(t7,width=n_cons,FUN=min, na.rm = F,align = "left")
colnames(t7_min)<-c("min_F_HT","min_F_LT")
# create table with maximum absolute deviation from the mean values
t7_dif<-pmax((t7_max-t7_rm[1:nrow(t7_max),]),(t7_rm[1:nrow(t7_min),]-t7_min))
colnames(t7_dif)<-c("diff_F_HT","diff_F_LT")
# Enter tolerance limits
V1_tol<-50 # F_HT
V2_tol<-50 # F_LT
# Create a tolerance table
t7_tol<-cbind(rep(V1_tol,nrow(t7_dif)),rep(V2_tol,nrow(t7_dif)))
# Create a logical table with TRUE or FALSE depending on whether the max deviation is within tolerance
t7_check<-(t7_dif<t7_tol)
# Replace all "FALSE" with "NA" (so that complete.cases can be used below)
t7_check_NA<-apply(t7_check,c(1,2),function(x) {ifelse(x==FALSE,NA,x)})
# Create rolling mean over complete data
t6_rm<-rollapply(t6,n_cons,mean,na.rm=TRUE,align = "left")
# Create a map of stable operation points with means of parameters and result
t6_map<-t6_rm[complete.cases(t7_check_NA),]
The result differs from my original one, because no rows are dropped after a stable block is found (the rolling windows overlap). But this works for me.
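If the original behaviour is preferred (dropping the evaluated rows after each accepted block, so that stable windows don't overlap), one possibility is a greedy thinning of the stable window starts. This is only a sketch on top of the objects above, not something I needed:
stable_idx <- which(complete.cases(t7_check_NA)) # starts of stable windows
keep <- integer(0)
last <- -Inf
for (i in stable_idx) {
  if (i - last >= n_cons) { # keep only starts at least n_cons rows apart
    keep <- c(keep, i)
    last <- i
  }
}
t6_map_nonoverlap <- t6_rm[keep, ]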
The language I'm using is R, but you don't necessarily need to know about R to answer the question.
Question:
I have a sequence that can be considered the ground truth, and another sequence that is a shifted version of the first, with some missing values. I'd like to know how to align the two.
setup
I have a sequence ground.truth that is basically a set of times:
ground.truth <- rep( seq(1,by=4,length.out=10), 5 ) +
rep( seq(0,length.out=5,by=4*10+30), each=10 )
Think of ground.truth as times where I'm doing the following:
{take a sample every 4 seconds for 10 times, then wait 30 seconds} x 5
I have a second sequence observations, which is ground.truth shifted with 20% of the values missing:
nSamples <- length(ground.truth)
idx_to_keep <- sort(sample( 1:nSamples, .8*nSamples ))
theLag <- runif(1)*100
observations <- ground.truth[idx_to_keep] + theLag
nObs <- length(observations)
If I plot these vectors this is what it looks like (remember, think of these as times):
What I've tried. I want to:
calculate the shift (theLag in my example above)
calculate a vector idx such that ground.truth[idx] == observations - theLag
First, assume we know theLag. Note that ground.truth[1] is not necessarily observations[1]-theLag. In fact, we have ground.truth[1] == observations[1+lagI]-theLag for some lagI.
To calculate this, I thought I'd use cross-correlation (ccf function).
However, whenever I do this I get a lag with a max cross-correlation of 0, meaning ground.truth[1] == observations[1] - theLag. But I've tried this in examples where I've explicitly made sure that observations[1] - theLag is not ground.truth[1] (i.e. modified idx_to_keep to make sure it doesn't contain 1).
The shift theLag shouldn't affect the cross-correlation (isn't ccf(x,y) == ccf(x,y-constant)?) so I was going to work it out later.
Perhaps I'm misunderstanding though, because observations doesn't have as many values in it as ground.truth? Even in the simpler case where I set theLag==0, the cross correlation function still fails to identify the correct lag, which leads me to believe I'm thinking about this wrong.
Does anyone have a general methodology for me to go about this, or know of some R functions/packages that could help?
Thanks a lot.
For the lag, you can compute all the differences (distances) between your two sets of points:
diffs <- outer(observations, ground.truth, '-')
Your lag should be the value that appears length(observations) times:
which(table(diffs) == length(observations))
# 55.715382960625
# 86
Double check:
theLag
# [1] 55.71538
The second part of your question is easy once you have found theLag:
idx <- which(ground.truth %in% (observations - theLag))
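If you want to be a bit more defensive (just a sketch): the lag printed above is stored in the names of the table, so it can be extracted with as.numeric, and each shifted observation can be matched to the nearest ground-truth time instead of relying on exact floating-point equality with %in%:
lag_est <- as.numeric(names(which(table(diffs) == length(observations))))
idx <- sapply(observations - lag_est,
              function(v) which.min(abs(ground.truth - v)))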
The following should work if your time series are not too long.
You have two vectors of time-stamps, the second one being a shifted and incomplete copy of the first, and you want to find by how much it was shifted.
# Sample data
n <- 10
x <- cumsum(rexp(n,.1))
theLag <- rnorm(1)
y <- theLag + x[sort(sample(1:n, floor(.8*n)))]
We can try all possible lags and, for each one, compute how bad the alignment is by matching each observed timestamp with the next "truth" timestamp.
# Loss function
library(sqldf)
f <- function(u) {
  # Put all the values in a data.frame
  d1 <- data.frame(g="truth", value=x)
  d2 <- data.frame(g="observed", value=y+u)
  d <- rbind(d1, d2)
  # For each observed value, find the next truth value
  # (we could take the nearest, on either side,
  # but it would be more complicated)
  d <- sqldf("
    SELECT A.g, A.value,
           ( SELECT MIN(B.value)
             FROM d AS B
             WHERE B.g='truth'
               AND B.value >= A.value
           ) AS next
    FROM d AS A
    WHERE A.g = 'observed'
  ")
  # If u is greater than the lag, there are missing values.
  # If u is smaller, the differences decrease
  # as we approach the lag.
  if(any(is.na(d))) {
    return(Inf)
  } else {
    return( sum(d$`next` - d$value, na.rm=TRUE) )
  }
}
We can now search for the best lag.
# Look at the loss function
sapply( seq(-2,2,by=.1), f )
# Minimize the loss function.
# Change the interval if it does not converge,
# i.e., if it seems in contradiction with the values above
# or if the minimum is Inf
(r <- optimize(f, c(-3,3)))
-r$minimum
theLag # Same value, most of the time
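For what it's worth, the same loss could also be written without sqldf, using findInterval (a sketch under the same setup, with x and y as above; left.open = TRUE makes the match mean "the smallest truth timestamp greater than or equal to the shifted observation"):
f2 <- function(u) {
  xs <- sort(x)
  # Index of the smallest "truth" timestamp >= each shifted observation
  pos <- findInterval(y + u, xs, left.open = TRUE) + 1
  if (any(pos > length(xs))) return(Inf) # some observation has no truth value after it
  sum(xs[pos] - (y + u))
}
optimize(f2, c(-3, 3))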