Check whether elements of vectors are inside intervals given by matrix - r

Actually a really nice problem, for which I came up with a solution (see below) that is, however, not beautiful:
Assume you have a vector x and a matrix A which contains the start of an interval in the first column and the end of the interval in the second.
How can I get the elements of x that fall into the intervals given by A?
x <- c(4, 7, 15)
A <- cbind(c(3, 9, 14), c(5, 11, 16))
Expected output:
[1] 4 15
You could use the following information if it helps to increase performance:
Both the vector and the rows of the matrix are ordered, and the intervals don't overlap. All intervals have the same length. All numbers are integers, but they can be huge.
Now, not wanting to be lazy, I came up with the following solution, which is too slow for long vectors and matrices:
x <- c(4, 7, 15) # Define input vector
A <- cbind(c(3, 9, 14), c(5, 11, 16)) # Define matrix with intervals
b <- vector()
for (i in 1:nrow(A)) {
  b <- c(b, A[i, 1]:A[i, 2])
}
x[x %in% b]
I know that loops in R can be slow, but I did not know how to write the operation without one (maybe there is a way with apply).
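As an aside, the stated guarantees (sorted vector; sorted, non-overlapping integer intervals) allow a single binary-search pass with findInterval(), with no loop over rows at all. Here is a sketch, assuming the intervals are meant to be closed, i.e. [start, end]:
in_interval <- function(x, A) {
  # interleave the starts and (ends + 1) into one sorted vector of breaks:
  # start1, end1 + 1, start2, end2 + 1, ...
  breaks <- as.vector(t(cbind(A[, 1], A[, 2] + 1)))
  # landing in an odd-numbered slot means x lies inside an interval
  findInterval(x, breaks) %% 2 == 1
}
x[in_interval(x, A)]
#[1]  4 15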

We can use sapply to loop over each element of x and check whether it lies within any of the intervals given by the rows of A.
x[sapply(x, function(i) any(i > A[, 1] & i < A[,2]))]
#[1] 4 15
If length(x) and nrow(A) are the same, we don't even need the sapply loop; we can use the comparison directly (each element is then checked only against the corresponding row):
x[x > A[, 1] & x < A[,2]]
#[1] 4 15

Here is a method that does not use an explicit loop or an apply function. outer is sometimes much faster.
x[rowSums(outer(x, A[,1], `>=`) & outer(x, A[,2], `<=`)) > 0]
[1] 4 15

This answer is late, but today I had the same problem to solve and my answer may be helpful to future readers. My solution was the following:
f3 <- function(x, A) {
  Reduce(f = "|",
         x = lapply(1:NROW(A), function(k) x > A[k, 1] & x < A[k, 2]),
         init = logical(length(x)))
}
This function returns a logical vector of the same length as x indicating whether the corresponding value of x falls into one of the intervals. To get the elements themselves, I simply write
x[f3(x,A)]
I did some benchmarks and my function seems to work very well, also while testing with larger data.
Let's define the other solutions suggested in this post:
f1 <- function(x, A) {
  sapply(x, function(i) any(i > A[, 1] & i < A[, 2]))
}
f2 <- function(x, A) {
  rowSums(outer(x, A[, 1], `>`) & outer(x, A[, 2], `<`)) > 0
}
Now they are also returning a logical vector.
The benchmarks on my machine are following:
x <- c(4, 7, 15)
A <- cbind(c(3, 9, 14), c(5, 11, 16))
microbenchmark::microbenchmark(f1(x,A), f2(x,A), f3(x,A))
#Unit: microseconds
# expr min lq mean median uq max neval
#f1(x, A) 21.5 23.20 25.023 24.30 25.40 61.8 100
#f2(x, A) 18.8 21.20 23.606 22.75 23.70 75.4 100
#f3(x, A) 13.9 15.85 18.682 18.30 19.15 52.2 100
It seems like there is no big difference, but the following example makes it more obvious:
x <- seq(1,100,length.out = 1e6)
A <- cbind(20:70,(20:70)+0.5)
microbenchmark::microbenchmark(f1(x,A), f2(x,A), f3(x,A), times=10)
#Unit: milliseconds
# expr min lq mean median uq max neval
#f1(x, A) 4176.172 4227.6709 4419.6010 4484.2946 4539.9668 4569.7412 10
#f2(x, A) 1418.498 1511.5647 1633.4659 1571.0249 1703.6651 1987.8895 10
#f3(x, A) 614.556 643.4138 704.3383 672.5385 770.7751 873.1291 10
That the functions all return the same result can be checked e.g. via:
all(f1(x,A)==f3(x,A))

Related

Keeping vectors (from list of vectors) whose elements do not have a proper subset within that same list (using RCPP)

I have asked this question previously (see here) and received a satisfactory answer using the purrr package. However, this has proved to be a bottleneck in my program, so I would like to rewrite the section using the Rcpp package.
Proper subset: A proper subset S' of a set S is a subset that is strictly contained in S and so excludes S itself (note I am also excluding the empty set).
Suppose you have the following vectors in a list:
a = c(1,2)
b = c(1,3)
c = c(2,4)
d = c(1,2,3,4)
e = c(2,4,5)
f = c(1,2,3)
My aim is to keep only vectors which have no proper subset within the list, which in this example would be a, b and c.
Previous Solution
library(purrr)
possibilities <- list(a,b,c,d,e,f)
keep(possibilities,
     map2_lgl(.x = possibilities,
              .y = seq_along(possibilities),
              ~ !any(map_lgl(possibilities[-.y], function(z) all(z %in% .x)))))
The idea here is to avoid the O(N^3) approach and use something of lower order instead. The other answer provided here will still be slow, since it is O(N^2) or worse. Here is a solution that is below O(N^2) in the typical case, with a worst case of O(N^2) when all the elements are unique.
onlySet <- function(x){
  i <- 1
  repeat{
    # TRUE for every remaining element that is NOT a superset of x[[1]]
    y <- sapply(x[-1], function(el) !all(is.element(x[[1]], el)))
    if(all(y)){
      if(i == length(x)) break
      else i <- i + 1
    }
    # drop the supersets of x[[1]] and rotate x[[1]] to the back
    x <- c(x[-1][y], x[1])
  }
  x
}
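Applied to the example data, onlySet keeps exactly a, b and c, although not necessarily in the input order:
onlySet(possibilities)
#[[1]]
#[1] 1 3
#
#[[2]]
#[1] 2 4
#
#[[3]]
#[1] 1 2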
Now to show the time difference, check out the following:
match_fun <- Vectorize(function(s1, s2) all(s1 %in% s2))
method1 <- function(a){
  mat <- outer(a, a, match_fun)
  a[colSums(mat) == 1]
}
poss <- rep(possibilities, 100)
microbenchmark::microbenchmark(method1(poss), onlySet(poss))
Unit: milliseconds
expr min lq mean median uq max neval cld
method1(poss) 840.7919 880.12635 932.255030 889.36380 923.32555 1420.1077 100 b
onlySet(poss) 1.9845 2.07005 2.191647 2.15945 2.24245 3.3656 100 a
Have you tried optimising the solution in base R first? For example, the following reproduces your expected output and uses (faster) base R array routines:
match_fun <- Vectorize(function(s1, s2) all(s1 %in% s2))
mat <- outer(possibilities, possibilities, match_fun)
possibilities[colSums(mat) == 1]
#[[1]]
#[1] 1 2
#
#[[2]]
#[1] 1 3
#
#[[3]]
#[1] 2 4
Inspired by Onyambu's performant solution, here is another base R option using a recursive function
f_recursive <- function(x, i = 1) {
  if (i > length(x)) return(x)
  idx <- which(sapply(x[-i], function(el) all(x[[i]] %in% el))) + 1
  if (length(idx) == 0) f_recursive(x, i + 1) else f_recursive(x[-idx], i + 1)
}
f_recursive(possibilities)
The performance is on par with Onyambu's solution.
poss <- rep(possibilities, 100)
microbenchmark::microbenchmark(
  method1(poss),
  onlySet(poss),
  f_recursive(poss))
#Unit: milliseconds
#              expr        min         lq       mean     median         uq         max neval
#     method1(poss) 682.558602 710.974831 750.325377 730.627996 765.040976 1200.562889   100
#     onlySet(poss)   1.700646   1.782713   1.870972   1.819820   1.918669    2.371646   100
# f_recursive(poss)   1.681120   1.737459   1.884685   1.806384   1.901582    3.217013   100

Minimum absolute difference between vector pairs (greedy algorithm)

Given a numeric vector, I'd like to find the smallest absolute difference in combinations of size 2. However, the point of friction comes with the use of combn to create the matrix holding the pairs. How would one handle issues when a matrix/vector is too large?
When the number of resulting pairs (number of columns) using combn is too large, I get the following error:
Error in matrix(r, nrow = len.r, ncol = count) :
invalid 'ncol' value (too large or NA)
This post states that the size limit of a matrix is roughly one billion rows and two columns.
Here is the code I've used. Apologies for the use of cat in my function output -- I'm solving the Minimum Absolute Difference in an Array greedy algorithm problem on HackerRank, and R outputs are only counted as correct if they're printed using cat:
minimumAbsoluteDifference <- function(arr) {
  combos <- combn(arr, 2)
  cat(min(abs(combos[1, ] - combos[2, ])))
}
# This works fine
input0 <- c(3, -7, 0)
minimumAbsoluteDifference(input0) #returns 3
# This fails
inputFail <- rpois(10e4, 1)
minimumAbsoluteDifference(inputFail)
#Error in matrix(r, nrow = len.r, ncol = count) :
# invalid 'ncol' value (too large or NA)
TL;DR
No need for combn or the like, simply:
min(abs(diff(sort(v))))
The Nitty Gritty
Finding the difference between every possible combination is O(n^2). So when we get to vectors of length 1e5, the task becomes burdensome both computationally and memory-wise.
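To see the scale: the number of pairs for a length-1e5 vector already exceeds the 2^31 - 1 limit on matrix dimensions, which is what triggers the "invalid 'ncol'" error above:
choose(1e5, 2)
#[1] 4999950000
.Machine$integer.max
#[1] 2147483647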
We need a different approach.
How about sorting and taking the difference only with its neighbor?
After sorting, for any element v_j, the smallest difference involving v_j is min(|v_j - v_(j-1)|, |v_j - v_(j+1)|), i.e. the difference with one of its immediate neighbors. For example, given the sorted vector v:
v = -9 -8 -6 -4 -2 3 8
The smallest distance from -2 is given by:
|-2 - 3| = 5
|-4 - -2| = 2
There is no need to check any other elements.
This is easily implemented in base R as follows:
getAbsMin <- function(v) min(abs(diff(sort(v))))
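For the sorted example vector above:
getAbsMin(c(-9, -8, -6, -4, -2, 3, 8))
#[1] 1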
I'm not going to use rpois, as with any reasonably sized vector duplicates will be produced, which will trivially give 0 as the answer. A more sensible test is with runif or sample (minimumAbsoluteDifference2 is from the answer provided by @RuiBarradas):
set.seed(1729)
randUnif100 <- lapply(1:100, function(x) {
  runif(1e3, -100, 100)
})
randInts100 <- lapply(1:100, function(x) {
  sample(-(1e9):(1e9), 1e3)
})
head(sapply(randInts100, getAbsMin))
[1] 586 3860 2243 2511 5186 3047
identical(sapply(randInts100, minimumAbsoluteDifference2),
          sapply(randInts100, getAbsMin))
[1] TRUE
options(scipen = 99)
head(sapply(randUnif100, getAbsMin))
[1] 0.00018277206 0.00020549633 0.00009834766 0.00008395873 0.00005299225 0.00009313226
identical(sapply(randUnif100, minimumAbsoluteDifference2),
          sapply(randUnif100, getAbsMin))
[1] TRUE
It's very fast as well:
library(microbenchmark)
microbenchmark(a = getAbsMin(randInts100[[50]]),
               b = minimumAbsoluteDifference2(randInts100[[50]]),
               times = 25, unit = "relative")
Unit: relative
expr min lq mean median uq max neval
a 1.0000 1.0000 1.0000 1.0000 1.00000 1.00000 25
b 117.9799 113.2221 105.5144 107.6901 98.55391 81.05468 25
Even for very large vectors, the result is instantaneous:
set.seed(321)
largeTest <- sample(-(1e12):(1e12), 1e6)
system.time(print(getAbsMin(largeTest)))
[1] 3
user system elapsed
0.083 0.003 0.087
Something like this?
minimumAbsoluteDifference2 <- function(x){
  stopifnot(length(x) >= 2)
  n <- length(x)
  inx <- rep(TRUE, n)
  m <- NULL
  for(i in seq_along(x)[-n]){
    inx[i] <- FALSE
    curr <- abs(x[i] - x[which(inx)])
    m <- min(c(m, curr))
  }
  m
}
# This works fine
input0 <- c(3, -7, 0)
minimumAbsoluteDifference(input0) #returns 3
minimumAbsoluteDifference2(input0) #returns 3
set.seed(2020)
input1 <- rpois(1e3, 1)
minimumAbsoluteDifference(input1) #returns 0
minimumAbsoluteDifference2(input1) #returns 0
inputFail <- rpois(1e5, 1)
minimumAbsoluteDifference(inputFail) # This fails
minimumAbsoluteDifference2(inputFail) # This does not fail

Trouble with R and vector length NA values appended

I've looked this over and I can't quite understand why this is giving me NA values appended onto the vector I want. Prompt below:
"The function should return a vector where the first element is the sum of the first n elements of the input vector, and the rest of the vector is a copy of the other elements of the input vector. For example, if the input vector is (2, 3, 6, 7, 8) and n = 2, then the output should be the vector (5, 6, 7, 8)"
testA <- c(1,2,3,4,5)
myFunction <- function(vector1, n)
{
  sum1 = 0
  for(i in 1:n)
  {
    sum1 <- sum1 + vector1[i]
    newVector <- c(sum1, vector1[n+1:length(vector1)])
  }
  return(newVector)
}
myFunction(testA, 3)
Output is: [1] 6 4 5 NA NA NA when it should just be 6 4 5
There is no need for a for loop here; you can do something like this
test <- c(2, 3, 6, 7, 8)
myfunction <- function(x, n) c(sum(x[1:n]), x[-(1:n)])
myfunction(test, 2)
#[1] 5 6 7 8
testA <- c(1,2,3,4,5)
myfunction(testA, 3)
#[1] 6 4 5
Explanation: sum(x[1:n]) calculates the sum of the first n elements of x and x[-(1:n)] returns x with the first n elements removed.
It can be done with head and tail
n <- 2
c(sum(head(test, 2)), tail(test, -2))
#[1] 5 6 7 8
data
test <- c(2, 3, 6, 7, 8)
Here I try to compare the efficiency of the two functions above: the one from the answer post (https://stackoverflow.com/a/52472214/3806250) and the one from the question post.
> testA <- 1:5
> myFunction <- function(vector1, n) {
+ sum1 <- 0
+ for(i in 1:n) {
+ sum1 <- sum1 + vector1[i]
+ newVector <- c(sum1, vector1[n+1:length(vector1)])
+ }
+ newVector <- newVector[!is.na(newVector)]
+ return(newVector)
+ }
>
> microbenchmark::microbenchmark(myFunction(testA, 3))
Unit: microseconds
expr min lq mean median uq max neval
myFunction(testA, 3) 3.592 4.1055 77.37798 4.106 4.619 7292.85 100
>
> myfunction <- function(x, n) c(sum(x[1:n]), x[-(1:n)])
>
> microbenchmark::microbenchmark(myfunction(testA, 2))
Unit: microseconds
expr min lq mean median uq max neval
myfunction(testA, 2) 1.539 1.54 47.04373 2.053 2.053 4462.644 100
Thank you for everyone's answers! I was really tired last night and couldn't come up with this simple solution:
myFunction <- function(vector1, n)
{
  sum1 = 0
  for(i in 1:n) # scan the input vector from the first element to element 'n'
  {
    sum1 <- sum1 + vector1[i] # sum of the numbers scanned
    newVector <- c(sum1, vector1[n+1:length(vector1)]) # new output vector starting with the sum, then the rest of the original vector after element 'n'
    length(newVector) <- (length(newVector) - n) # NA values were returned, so the length needs to be corrected with respect to 'n'
  }
  return(newVector)
}
There are already great solutions, but here is another option which makes minimal modifications to your original code:
testA <- c(1,2,3,4,5)
myFunction <- function(vector1, n)
{
  sum1 = 0
  for(i in 1:n)
  {
    sum1 <- sum1 + vector1[i]
  }
  newVector <- c(sum1, vector1[(n+1):length(vector1)]) # we take this line out of the for loop
                                                       # and put the n+1 between parentheses
  return(newVector)
}
newVector <- myFunction(testA, 3)
print(newVector)
The problem in the original code/example was that n+1:length(vector1) was supposed to return [1] 4 5, in order to do the appropriate subsetting (obtaining the last elements of the vector, which weren't included in the sum of the first n elements), but it actually returns [1] 4 5 6 7 8. Since there are no elements at positions 6:8 in testA, this is why the missing values (NAs) appear.
What n+1:length(vector1) actually does is first obtain the sequence 1:length(vector1) and then add n to each element. Here is an example of this behaviour using values:
3+1:5
#> [1] 4 5 6 7 8
We can solve this by putting n+1 between parentheses in the original code. In our example using values:
(3+1):5
#> [1] 4 5
Also, taking the assignment of newVector out of the loop improves performance, because the concatenation of sum1 with the subsetted vector only needs to happen once, after the sum of the first n elements is complete.

Find the second minimum value in R [duplicate]

R offers max and min, but I do not see a really fast way to find another value in the order, apart from sorting the whole vector and then picking a value x from this vector.
Is there a faster way to get the second highest value, for example?
Use the partial argument of sort(). For the second highest value:
n <- length(x)
sort(x,partial=n-1)[n-1]
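For example, for the second highest value of the vector used in the next answer:
x <- c(12.45, 34, 4, 0, -234, 45.6, 4)
n <- length(x)
sort(x, partial = n - 1)[n - 1]
#[1] 34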
Slightly slower alternative, just for the record:
x <- c(12.45,34,4,0,-234,45.6,4)
max( x[x!=max(x)] )
min( x[x!=min(x)] )
Rfast has a function called nth that does exactly what you ask.
Further, the methods discussed above that are based on a partial sort don't support finding the k smallest values.
Update (28/Feb/21): the kit package offers a faster implementation (topn); see https://stackoverflow.com/a/66367996/4729755 and https://stackoverflow.com/a/53146559/4729755.
Disclaimer: an issue appears to occur when dealing with integers, which can be bypassed by using as.numeric (e.g. Rfast::nth(as.numeric(1:10), 2)); it will be addressed in the next update of Rfast.
Rfast::nth(x, 5, descending = T)
Will return the 5th largest element of x, while
Rfast::nth(x, 5, descending = F)
Will return the 5th smallest element of x
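For instance, to get the second largest value of a small vector (assuming Rfast is installed):
x <- c(12.45, 34, 4, 0, -234, 45.6, 4)
Rfast::nth(x, 2, descending = TRUE)
#[1] 34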
Benchmarks below against most popular answers.
For 10 thousand numbers:
N = 10000
x = rnorm(N)
maxN <- function(x, N = 2){
  len <- length(x)
  if(N > len){
    warning('N greater than length(x). Setting N=length(x)')
    N <- length(x)
  }
  sort(x, partial = len - N + 1)[len - N + 1]
}
microbenchmark::microbenchmark(
  Rfast = Rfast::nth(x, 5, descending = T),
  maxN = maxN(x, 5),
  order = x[order(x, decreasing = T)[5]])
Unit: microseconds
expr min lq mean median uq max neval
Rfast 160.364 179.607 202.8024 194.575 210.1830 351.517 100
maxN 396.419 423.360 559.2707 446.452 487.0775 4949.452 100
order 1288.466 1343.417 1746.7627 1433.221 1500.7865 13768.148 100
For 1 million numbers:
N = 1e6
x = rnorm(N)
microbenchmark::microbenchmark(
  Rfast = Rfast::nth(x, 5, descending = T),
  maxN = maxN(x, 5),
  order = x[order(x, decreasing = T)[5]])
Unit: milliseconds
expr min lq mean median uq max neval
Rfast 89.7722 93.63674 114.9893 104.6325 120.5767 204.8839 100
maxN 150.2822 207.03922 235.3037 241.7604 259.7476 336.7051 100
order 930.8924 968.54785 1005.5487 991.7995 1031.0290 1164.9129 100
I wrapped Rob's answer up into a slightly more general function, which can be used to find the 2nd, 3rd, 4th (etc.) max:
maxN <- function(x, N = 2){
  len <- length(x)
  if(N > len){
    warning('N greater than length(x). Setting N=length(x)')
    N <- length(x)
  }
  sort(x, partial = len - N + 1)[len - N + 1]
}
maxN(1:10)
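#[1] 9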
Here is an easy way to find the indices of the N smallest/largest values in a vector (example for N = 3):
N <- 3
N Smallest:
ndx <- order(x)[1:N]
N Largest:
ndx <- order(x, decreasing = T)[1:N]
So you can extract the values as:
x[ndx]
For nth highest value,
sort(x, TRUE)[n]
Here you go... kit is the obvious winner!
N = 1e6
x = rnorm(N)
maxN <- function(x, N = 2){
  len <- length(x)
  if(N > len){
    warning('N greater than length(x). Setting N=length(x)')
    N <- length(x)
  }
  sort(x, partial = len - N + 1)[len - N + 1]
}
microbenchmark::microbenchmark(
  Rfast = Rfast::nth(x, 5, descending = T),
  maxN = maxN(x, 5),
  order = x[order(x, decreasing = T)[5]],
  kit = x[kit::topn(x, 5L, decreasing = T)[5L]]
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# Rfast 12.311168 12.473771 16.36982 12.702134 16.110779 102.749873 100
# maxN 12.922118 13.124358 17.49628 18.977537 20.053139 28.928694 100
# order 50.443100 50.926975 52.54067 51.270163 52.323116 66.561606 100
# kit 1.177202 1.216371 1.29542 1.240228 1.297286 2.771715 100
Edit: I forgot that kit::topn has a hasna option... let's do another run.
microbenchmark::microbenchmark(
  Rfast = Rfast::nth(x, 5, descending = T),
  maxN = maxN(x, 5),
  order = x[order(x, decreasing = T)[5]],
  kit = x[kit::topn(x, 5L, decreasing = T)[5L]],
  kit2 = x[kit::topn(x, 5L, decreasing = T, hasna = F)[5L]],
  unit = "ms"
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# Rfast 13.194314 13.358787 14.7227116 13.4560340 14.551194 24.524105 100
# maxN 7.378960 7.527661 10.0747803 7.7119715 12.217756 67.409526 100
# order 50.088927 50.488832 52.4714347 50.7415680 52.267003 70.062662 100
# kit 1.180698 1.217237 1.2975441 1.2429790 1.278243 3.263202 100
# kit2 0.842354 0.876329 0.9398055 0.9109095 0.944407 2.135903 100
Here is the simplest way I found:
num <- c(5665,1615,5154,65564,69895646)
num <- sort(num, decreasing = F)
tail(num, 1)          # highest number
head(tail(num, 2), 1) # second highest number
head(tail(num, 3), 1) # third highest number
head(tail(num, n), 1) # general expression for the nth highest number
I found that removing the max element first and then taking the max again runs at a comparable speed:
system.time({a=runif(1000000);m=max(a);i=which.max(a);b=a[-i];max(b)})
user system elapsed
0.092 0.000 0.659
system.time({a=runif(1000000);n=length(a);sort(a,partial=n-1)[n-1]})
user system elapsed
0.096 0.000 0.653
dplyr has the function nth, where the first argument is the vector and the second is the position you want. This works with repeated elements as well.
For example:
x = c(1,2, 8, 16, 17, 20, 1, 20)
Finding the second largest value:
nth(unique(x),length(unique(x))-1)
[1] 17
When I was recently looking for an R function returning the indexes of the top N max/min numbers in a given vector, I was surprised there is no such function. And this question is something very similar.
The brute force solution using base::order function seems to be the easiest one.
topMaxUsingFullSort <- function(x, N) {
  sort(x, decreasing = TRUE)[1:min(N, length(x))]
}
But it is not the fastest one when your N value is relatively small compared to the length of the vector x.
On the other side, if N is really small, you can use the base::which.max function iteratively, replacing the found value with -Inf in each iteration:
# the input vector 'x' must not contain the value -Inf
topMaxUsingWhichMax <- function(x, N) {
  vals <- c()
  for(i in 1:min(N, length(x))) {
    idx <- which.max(x)
    vals <- c(vals, x[idx]) # copy-on-modify (not an issue: vals is a relatively small vector)
    x[idx] <- -Inf          # copy-on-modify (this is the issue: the data vector could be huge)
  }
  vals
}
I believe you see the problem - the copy-on-modify nature of R. So this will perform better for very very very small N (1,2,3) but it will rapidly slow down for larger N values. And you are iterating over all elements in vector x N times.
I think the best solution in clean R is to use partial base::sort.
topMaxUsingPartialSort <- function(x, N) {
  N <- min(N, length(x))
  x[x >= -sort(-x, partial = N)[N]][1:N]
}
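For example (note that the values come back in their order of appearance in x, not sorted):
topMaxUsingPartialSort(c(12.45, 34, 4, 0, -234, 45.6, 4), 3)
#[1] 12.45 34.00 45.60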
Then you can select the last (Nth) item from the result of the functions defined above.
Note: the functions defined above are just examples; if you want to use them, you have to sanity-check the inputs (e.g. N > length(x)).
I wrote a small article about something very similar (getting the indexes of the top N max/min values of a vector) at http://palusga.cz/?p=18 - there you can find some benchmarks of functions similar to those defined above.
head(sort(x),..) or tail(sort(x),...) should work
This will find the index of the N'th smallest or largest value in the input numeric vector x. Set bottom=TRUE in the arguments if you want the N'th from the bottom, or bottom=FALSE if you want the N'th from the top. N=1 and bottom=TRUE is equivalent to which.min, N=1 and bottom=FALSE is equivalent to which.max.
FindIndicesBottomTopN <- function(x = c(4,-2,5,-77,99), N = 1, bottom = FALSE)
{
  k1 <- rank(x)
  if(bottom == TRUE){
    Nindex <- which(k1 == N)
    Nindex <- Nindex[1]
  }
  if(bottom == FALSE){
    Nindex <- which(k1 == (length(x) + 1 - N))
    Nindex <- Nindex[1]
  }
  return(Nindex)
}
topn <- function(vector, n){
  maxs <- c()
  ind <- c()
  for (i in 1:n){
    biggest <- which.max(vector)
    ind[i] <- biggest
    maxs[i] <- vector[biggest]
    vector[biggest] <- -Inf # mask the found maximum instead of removing it,
                            # so later indices still refer to the original vector
  }
  mat <- cbind(maxs, ind)
  return(mat)
}
This function will return a matrix with the top n values and their indices in the original vector. Hope it helps.
You can identify the next higher value with cummax(). If, for example, you want the location of each new higher value, you can pass your vector of cummax() values to the diff() function to identify the locations at which the cummax() value changed. Say we have the vector
v <- c(4,6,3,2,-5,6,8,12,16)
cummax(v) will give us the vector
4 6 6 6 6 6 8 12 16
Now, if you want to find the location of a change in cummax(), you have many options; I tend to use sign(diff(cummax(v))). You have to adjust for the element lost to diff(). The complete code for vector v would be:
which(sign(diff(cummax(v)))==1)+1
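For the example vector this returns the positions of each new running maximum:
which(sign(diff(cummax(v))) == 1) + 1
#[1] 2 7 8 9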
You can use the sort function like this:
sort(unique(c), decreasing = TRUE)[1:N]
Example:
c <- c(4,2,44,2,1,45,34,2,4,22,244)
sort(unique(c), decreasing = TRUE)[1:5]
will give the 5 largest values.

R: Getting indices of elements in a sorted vector

I have a sorted vector, let's say
v <- c(1, 1, 2, 3, 5, 8, 13, 21, 34)
Now I want to find the index i of the first element which is bigger than for example a <- 15.
I could do something like i <- which(v > a)[1].
But I want to exploit the fact that v is sorted, which I don't think which cares about.
I could write it myself and divide the interval recursively in halves and search in those partial intervals...
Is there any built-in solution? As usual the main issue is speed and my own function would be slower surely.
Thank you.
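For reference, the hand-rolled bisection the question describes takes only a few lines in base R; here is a sketch (the helper binary_search_gt is hypothetical, not from any package; the Rcpp answer below does the same job with std::lower_bound):
binary_search_gt <- function(v, a) {
  # index of the first element of the sorted vector v that is > a, or NA if none
  lo <- 1L
  hi <- length(v)
  if (v[hi] <= a) return(NA_integer_)
  while (lo < hi) {
    mid <- (lo + hi) %/% 2L
    if (v[mid] > a) hi <- mid else lo <- mid + 1L
  }
  lo
}
binary_search_gt(c(1, 1, 2, 3, 5, 8, 13, 21, 34), 15)
#[1] 8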
For the speed-gluttons:
a <- 10
v <- sort(runif(1e7,0,1000));
Rcpp::cppFunction('int min_index(NumericVector v, double a) {
  // 0-based position of the first element that is >= a
  NumericVector::iterator low = std::lower_bound(v.begin(), v.end(), a);
  return (low - v.begin());
}')
microbenchmark::microbenchmark(which(v > a)[1], min_index(v, a), unit="relative")
#Unit: relative
# expr min lq mean median uq max neval
#which(v > a)[1] 61299.15 67211.58 14346.42 8797.526 8683.39 11163.27 100
#min_index(v, a) 1.00 1.00 1.00 1.000 1.00 1.00 100
There is uniroot. It uses bisection and is faster on much longer vectors.
v <- c(1,1,2,3,5,8,13,21,34)
a <- 15
root <- uniroot(f = function(x) v[x] - a, interval = c(1, length(v)))
my_index <- floor(root$root)
Just wondering if the following may be useful.
Filter(function(x) x > 15, v)[1]
#[1] 21
Find(function(x) x > 15, v, right = FALSE, nomatch = NULL)
#[1] 21
Position(function(x) x > 15, v, right = FALSE, nomatch = NA_integer_)
#[1] 8
which isn't exactly slow, so what about min(which()):
v <- c(1,1,2,3,5,8,13,21,34)
system.time(
  print(min(which(v > 5)))
)
# [1] 6
#  user  system elapsed
#      0       0       0
