Implementation of skyline query or efficient frontier in R

I know there must be an easy answer to this but somehow I can't seem to find it...
I have a data frame with 2 numeric columns.
I would like to remove the rows for which there exists at least one other row in the data frame with both column values bigger than the values in this row.
So if I have
  Col1 Col2
1    2    3
2    4    7
3    5    6
I would like to remove the first row, because the second one fulfills the property, and keep only rows 2 and 3.
Thanks a lot!

That problem is called a "skyline query" by database administrators (they may have other algorithms) and an "efficient frontier" by economists.
Plotting the data can make it clear what we are looking for.
n <- 40
d <- data.frame(
  x = rnorm(n),
  y = rnorm(n)
)
# We want the "extreme" points in the following plot
par(mar = c(1, 1, 1, 1))
plot(d, axes = FALSE, xlab = "", ylab = "")
for (i in 1:n) {
  polygon(c(-10, d$x[i], d$x[i], -10), c(-10, -10, d$y[i], d$y[i]),
          col = rgb(.9, .9, .9, .2))
}
The algorithm is as follows: sort the points along the first coordinate in decreasing order, then walk through them, keeping each observation only if its second coordinate beats that of the last retained one.
d <- d[order(d$x, decreasing = TRUE), ]
result <- d[1, ]
for (i in seq_len(nrow(d))[-1]) {
  if (d$y[i] > result$y[nrow(result)]) {
    result <- rbind(result, d[i, ])  # inefficient: grows the result row by row
  }
}
points(result, cex=3, pch=15)

Edit (2015-03-02): For a more efficient solution, please see Patrick Roocks' rPref, a package for "Database Preferences and Skyline Computation" (also linked in his answer below). To show that it finds the same solution as my code here, I've appended an example using it to my original answer.
Riffing off of Vincent Zoonekynd's enlightening response, here's an algorithm that's fully vectorized and likely more efficient:
set.seed(100)
d <- data.frame(x = rnorm(100), y = rnorm(100))
D <- d[order(d$x, d$y, decreasing=TRUE), ]
res <- D[which(!duplicated(cummax(D$y))), ]
# x y
# 64 2.5819589 0.7946803
# 20 2.3102968 1.6151907
# 95 -0.5302965 1.8952759
# 80 -2.0744048 2.1686003
# And then, if you would prefer the rows to be in
# their original order, just do:
d[sort(as.numeric(rownames(res))), ]
# x y
# 20 2.3102968 1.6151907
# 64 2.5819589 0.7946803
# 80 -2.0744048 2.1686003
# 95 -0.5302965 1.8952759
Or, using the rPref package:
library(rPref)
psel(d, high(x) | high(y))
# x y
# 20 2.3102968 1.6151907
# 64 2.5819589 0.7946803
# 80 -2.0744048 2.1686003
# 95 -0.5302965 1.8952759
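As a quick sanity check, the same cummax trick applied to the question's three-row example keeps exactly rows 2 and 3 (a minimal sketch; dd is just the question's data re-entered):
dd <- data.frame(Col1 = c(2, 4, 5), Col2 = c(3, 7, 6))
DD <- dd[order(dd$Col1, dd$Col2, decreasing = TRUE), ]  # sort by Col1, break ties by Col2
DD[!duplicated(cummax(DD$Col2)), ]  # keep a row only when the running max of Col2 advances
#   Col1 Col2
# 3    5    6
# 2    4    7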

Here is an sqldf solution where DF is the data frame of data:
library(sqldf)
sqldf("select * from DF a
       where not exists (
         select * from DF b
         where (b.Col1 >= a.Col1 and b.Col2 > a.Col2)
            or (b.Col1 > a.Col1 and b.Col2 >= a.Col2)
       )")

This question is pretty old, but meanwhile there is a new solution. I hope it is ok to do some self-promotion here: I developed a package, rPref, which does efficient skyline computation using C++ algorithms. With the rPref package installed, the query from the question can be done via (assuming df is the name of the data set):
library(rPref)
psel(df, high(Col1) | high(Col2))
This removes only those tuples where some other tuple is better in both dimensions.
If one requires the other tuple to be strictly better in just one dimension (and better or equal in the other dimension), use high(Col1) * high(Col2) instead.
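To make the difference concrete, here is a minimal sketch applying both operators to the question's three rows (on this data the results coincide, since there are no ties in either column):
library(rPref)
df <- data.frame(Col1 = c(2, 4, 5), Col2 = c(3, 7, 6))
psel(df, high(Col1) | high(Col2))  # drops row 1: row 2 beats it in both columns
psel(df, high(Col1) * high(Col2))  # Pareto composition: same result on this data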

In one line:
d <- matrix(c(2, 3, 4, 7, 5, 6), nrow = 3, byrow = TRUE)
d[!(apply(d, 1, max) < max(apply(d, 1, min))), ]
#      [,1] [,2]
# [1,]    4    7
# [2,]    5    6
Edit: In light of the clarification you gave under jbaums' answer, here's how to check both columns separately.
d <- matrix(c(2, 3, 3, 7, 5, 6, 4, 8), nrow = 4, byrow = TRUE)
d[apply(d, 1, min) > min(apply(d, 1, max)), ]
#      [,1] [,2]
# [1,]    5    6
# [2,]    4    8

d <- matrix(c(2, 3, 4, 7, 5, 6), nrow = 3, byrow = TRUE)
# column j of d2 flags, for each row i, whether row i beats row j in that column
d2 <- sapply(d[, 1], function(x) x < d[, 1]) &
      sapply(d[, 2], function(x) x < d[, 2])
d2 <- apply(d2, 2, any)  # TRUE for rows dominated by some other row
result <- d[!d2, ]
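The same pairwise dominance test can also be phrased with outer() instead of the two sapply() calls; a sketch (still O(n^2) comparisons, so only for moderate n):
# dominated[j] is TRUE if some row i beats row j in both columns
dominated <- apply(outer(d[, 1], d[, 1], `<`) &
                   outer(d[, 2], d[, 2], `<`), 1, any)
result <- d[!dominated, ]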

Related

Calculate Euclidean distance with R

I have data where rows are points and columns are coordinates x, y, z.
I'd like to calculate the Euclidean distance between points in pairs: 3-4, 11-12, 18-19, and so on. For example, I don't need the distance between 3 and 11, 12, or 18.
The problem is that I have to analyze 1074 tables with 1000 rows or more, so I'm searching for a way to do it automatically, maybe using the fact that I want the distance between an odd-numbered row and the even one following it. I don't care too much about the output format, but please consider that afterwards I have to select only distances < 3.2, so a data frame format would be great.
THANK YOU! :*
How about something like this:
First, I'll make some fake data
set.seed(4304)
df <- data.frame(
  x = runif(1000, -1, 1),
  y = runif(1000, -1, 1),
  z = runif(1000, -1, 1)
)
Make a sequence of values from 1 to the number of rows of your dataset by 2s.
s <- seq(1, nrow(df), by=2)
Use sapply() to compute the distance between each pair of points.
out <- sapply(s, function(i){
  sqrt(sum((df[i, ] - df[(i+1), ])^2))
})
Organize the distances into a data frame
res <- data.frame(
  pair = paste(rownames(df)[s], rownames(df)[(s+1)], sep = "-"),
  dist = out)
head(res)
# pair dist
# 1 1-2 1.379992
# 2 3-4 1.303511
# 3 5-6 1.242302
# 4 7-8 1.257228
# 5 9-10 1.107484
# 6 11-12 1.392247
Here is a function that can be applied to a data.frame or matrix holding the data.
DistEucl <- function(X){
  # group the rows into consecutive pairs: 1,1,2,2,3,3,...
  i <- cumsum(seq_len(nrow(X)) %% 2 == 1)
  sapply(split(X, i), function(Y){
    sqrt(sum((Y[1, ] - Y[2, ])^2))
  })
}
DistEucl(df1)
# 1 2 3 4
#1.229293 1.234273 1.245567 1.195319
With the data in DaveArmstrong's answer, the results are the same except for a names attribute in the above function's return value.
out2 <- DistEucl(df)
all.equal(out, out2)
#[1] "names for current but not for target"
identical(out, unname(out2))
#[1] TRUE
Data in the question
x <- c(13.457, 13.723, 15.319, 15.713, 18.446, 19.488, 19.762, 19.743)
y <- c(28.513, 29.656, 28.510, 27.342, 28.827, 28.24, 29.841, 30.942)
z <- c(40.513, 40.147, 43.281, 43.218, 43.095, 43.443, 40.094, 40.559)
df1 <- data.frame(x, y, z)
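For completeness, the pairwise distances can also be computed without any apply loop, by subtracting the odd rows from the even rows in one vectorized step; a sketch on df1 from above (it reproduces the values shown earlier):
odd <- seq(1, nrow(df1) - 1, by = 2)
# element-wise difference between each odd row and the even row that follows it
sqrt(rowSums((df1[odd, ] - df1[odd + 1, ])^2))
#        1        3        5        7
# 1.229293 1.234273 1.245567 1.195319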

Can you implement 'sweep' using apply in R?

I'm brushing up on my R skills and finally feel like I've mastered the strange sweep function e.g.
df <- data.frame(a = 1:3, b = 2:4)
sweep(df, MARGIN = 2, STATS = c(5, 10), FUN = "*")
## a b
## 1 5 20
## 2 10 30
## 3 15 40
and, more usefully, in a tutorial I'm working through on implementing a spatial interaction model in R.
They say that a sign you understand something is that you can say it in many ways, and I think this applies more in programming than almost anywhere else. Yet, despite the problem that sweep solves seeming apply-esque, I have NO IDEA whether they are to some degree interchangeable.
So, in order to improve my own understanding of R, is there any way to do the above procedure using apply?
This is close:
t(apply(df, 1, `*`, c(5,10)))
The row names are lost but otherwise the output is the same
> t(apply(df, 1, '*', c(5,10)))
a b
[1,] 5 20
[2,] 10 30
[3,] 15 40
To break this down, say we were doing this by hand for the first row of df, we'd write
> df[1, ] * c(5, 10)
a b
1 5 20
which is the same as calling the '*'() function with arguments df[1, ] and c(5, 10)
> '*'(df[1, ], c(5, 10))
a b
1 5 20
From this, we have enough to set up an apply() call:
- we work by rows, hence MARGIN = 1,
- we apply the function '*'(), so FUN = '*',
- we need to supply the second argument, c(5,10), to '*'(), which we do via the ... argument of apply().
The only extra thing to realise is how apply() sticks together the vector resulting from each "iteration"; here they are bound column-wise and hence we need to transpose the result from apply() so that we get the same output as sweep().
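Another way to see the same operation: a data frame is a list of columns, so the column-wise scaling can also be phrased with Map(), which pairs each column with its factor and, unlike the apply() route, keeps the names (a sketch, not what sweep() does internally):
# multiply column a by 5 and column b by 10, positionally
as.data.frame(Map(`*`, df, c(5, 10)))
#    a  b
# 1  5 20
# 2 10 30
# 3 15 40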
As additional information, since questions about sweep recur, a quick benchmark gives (on an Intel i7-8700 running Windows):
x <- matrix(data = 20000*5000, nrow = 20000, ncol = 5000)
system.time(expr = {
  aa <- colMeans(x = x)
  bb <- sweep(x = x, MARGIN = 2, STATS = aa, FUN = "-")
})
#  user  system elapsed
#  4.69    0.16    4.84
system.time(expr = {
  bbb <- apply(X = x, MARGIN = 1, FUN = function(z) z - mean(x = z))
  bbb <- t(x = bbb)
})
#  user  system elapsed
#  6.28    0.55    6.85
Meaning that sweep is more efficient when applicable.
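Worth noting for this particular benchmark: the centering case is also covered directly by base R's scale(), which should behave much like the sweep() call here (a quick equivalence check, not a benchmark):
cc <- scale(x, center = TRUE, scale = FALSE)  # subtract column means
all.equal(bb, cc, check.attributes = FALSE)   # ignore the "scaled:center" attribute
# [1] TRUE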

Find the lower points of the two data columns and compare [duplicate]

I'm looking for a computationally efficient way to find local maxima/minima for a large list of numbers in R.
Hopefully without for loops...
For example, if I have a datafile like 1 2 3 2 1 1 2 1, I want the function to return 3 and 7, which are the positions of the local maxima.
diff(diff(x)) (or diff(x, differences = 2): thanks to @ZheyuanLi) essentially computes the discrete analogue of the second derivative, so it should be negative at local maxima. The +1 below takes care of the fact that the result of diff is shorter than the input vector.
edit: added @Tommy's correction for cases where delta-x is not 1...
tt <- c(1,2,3,2,1, 1, 2, 1)
which(diff(sign(diff(tt)))==-2)+1
My suggestion above ( http://statweb.stanford.edu/~tibs/PPC/Rdist/ ) is intended for the case where the data are noisier.
@Ben's solution is pretty sweet. It doesn't handle the following cases though:
# all these return numeric(0):
x <- c(1,2,9,9,2,1,1,5,5,1) # duplicated points at maxima
which(diff(sign(diff(x)))==-2)+1
x <- c(2,2,9,9,2,1,1,5,5,1) # duplicated points at start
which(diff(sign(diff(x)))==-2)+1
x <- c(3,2,9,9,2,1,1,5,5,1) # start is maxima
which(diff(sign(diff(x)))==-2)+1
Here's a more robust (and slower, uglier) version:
localMaxima <- function(x) {
  # Use -Inf instead if x is numeric (non-integer)
  y <- diff(c(-.Machine$integer.max, x)) > 0L
  y <- cumsum(rle(y)$lengths)
  y <- y[seq.int(1L, length(y), 2L)]
  if (x[[1]] == x[[2]]) {
    y <- y[-1]
  }
  y
}
x <- c(1,2,9,9,2,1,1,5,5,1)
localMaxima(x) # 3, 8
x <- c(2,2,9,9,2,1,1,5,5,1)
localMaxima(x) # 3, 8
x <- c(3,2,9,9,2,1,1,5,5,1)
localMaxima(x) # 1, 3, 8
Use the zoo library function rollapply:
x <- c(1, 2, 3, 2, 1, 1, 2, 1)
library(zoo)
xz <- as.zoo(x)
rollapply(xz, 3, function(x) which.min(x)==2)
# 2 3 4 5 6 7
#FALSE FALSE FALSE TRUE FALSE FALSE
rollapply(xz, 3, function(x) which.max(x)==2)
# 2 3 4 5 6 7
#FALSE TRUE FALSE FALSE FALSE TRUE
Then pull the index using coredata for those values where which.max picked a "center value", signaling a local maximum. You could obviously do the same for local minima using which.min instead of which.max.
rxz <- rollapply(xz, 3, function(x) which.max(x)==2)
index(rxz)[coredata(rxz)]
#[1] 3 7
I am assuming you do not want the starting or ending values, but if you do, you could pad the ends of your vectors before processing, rather like telomeres do on chromosomes; a sketch of that idea follows.
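A minimal sketch of that padding idea, using -Inf as the "telomeres" so that boundary maxima become detectable center values (the final subtraction shifts the indices back to the original vector):
xz2 <- as.zoo(c(-Inf, x, -Inf))  # pad both ends
rxz2 <- rollapply(xz2, 3, function(x) which.max(x) == 2)
index(rxz2)[coredata(rxz2)] - 1  # back to the original positions
# [1] 3 7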
(I'm noting the ppc package ("Peak Probability Contrasts", for doing mass spectrometry analyses) simply because I was unaware of its availability until reading @BenBolker's comment above, and I think adding these few words will increase the chances that someone with a mass-spec interest will see this on a search.)
I took a stab at this today. I know you said hopefully without for loops, but I stuck with the apply family. It's somewhat compact and fast, and allows threshold specification so you can go greater than 1.
The function:
inflect <- function(x, threshold = 1){
  up   <- sapply(1:threshold, function(n) c(x[-(seq(n))], rep(NA, n)))
  down <- sapply(-1:-threshold, function(n) c(rep(NA, abs(n)), x[-seq(length(x), length(x) - abs(n) + 1)]))
  a    <- cbind(x, up, down)
  list(minima = which(apply(a, 1, min) == a[, 1]),
       maxima = which(apply(a, 1, max) == a[, 1]))
}
To visualize it and play with thresholds, you can run the following code:
# Pick a desired threshold # to plot up to
n <- 2
# Generate data
randomwalk <- 100 + cumsum(rnorm(50, 0.2, 1))  # climbs upwards most of the time
bottoms <- lapply(1:n, function(x) inflect(randomwalk, threshold = x)$minima)
tops    <- lapply(1:n, function(x) inflect(randomwalk, threshold = x)$maxima)
# Color functions
cf.1 <- grDevices::colorRampPalette(c("pink", "red"))
cf.2 <- grDevices::colorRampPalette(c("cyan", "blue"))
plot(randomwalk, type = 'l', main = "Minima & Maxima\nVariable Thresholds")
for(i in 1:n){
  points(bottoms[[i]], randomwalk[bottoms[[i]]], pch = 16, col = cf.1(n)[i], cex = i/1.5)
}
for(i in 1:n){
  points(tops[[i]], randomwalk[tops[[i]]], pch = 16, col = cf.2(n)[i], cex = i/1.5)
}
legend("topleft", legend = c("Minima", 1:n, "Maxima", 1:n),
       pch = rep(c(NA, rep(16, n)), 2), col = c(1, cf.1(n), 1, cf.2(n)),
       pt.cex = c(rep(c(1, c(1:n)/1.5), 2)), cex = .75, ncol = 2)
There are some good solutions provided, but it depends on what you need.
Just diff(tt) returns the differences.
You want to detect when you go from increasing values to decreasing values. One way to do this is provided by @Ben:
diff(sign(diff(tt)))==-2
The problem here is that this will only detect changes that go immediately from strictly increasing to strictly decreasing.
A slight change will allow for repeated values at the peak (returning TRUE for the last occurrence of the peak value):
diff(diff(x)>=0)<0
Then, you simply need to properly pad the front and back if you want to detect maxima at the beginning or end of the sequence.
Here's everything wrapped in a function (including finding of valleys):
which.peaks <- function(x, partial = TRUE, decreasing = FALSE){
  if (decreasing){
    if (partial){
      which(diff(c(FALSE, diff(x) > 0, TRUE)) > 0)
    } else {
      which(diff(diff(x) > 0) > 0) + 1
    }
  } else {
    if (partial){
      which(diff(c(TRUE, diff(x) >= 0, FALSE)) < 0)
    } else {
      which(diff(diff(x) >= 0) < 0) + 1
    }
  }
}
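A quick usage sketch on the question's vector (outputs worked out by hand; partial = TRUE includes endpoints and the trailing edge of flat runs):
tt <- c(1, 2, 3, 2, 1, 1, 2, 1)
which.peaks(tt, partial = FALSE)    # 3 7 -- interior peaks only
which.peaks(tt, decreasing = TRUE)  # 1 6 8 -- valleys, endpoints included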
Late to the party, but this might be of interest for others. You can nowadays use the (internal) function find_peaks from the ggpmisc package. You can parametrize it using the threshold, span and strict arguments. Since the ggpmisc package is aimed at use with ggplot2, you can directly plot minima and maxima using the stat_peaks and stat_valleys functions:
library(ggplot2)
library(ggpmisc)
set.seed(1)
x <- 1:10
y <- runif(10)
# Maxima
x[ggpmisc:::find_peaks(y)]
# [1] 4 7
y[ggpmisc:::find_peaks(y)]
# [1] 0.9082078 0.9446753
# Minima
x[ggpmisc:::find_peaks(-y)]
# [1] 5
y[ggpmisc:::find_peaks(-y)]
# [1] 0.2016819
# Plot
ggplot(data = data.frame(x, y), aes(x = x, y = y)) +
  geom_line() +
  stat_peaks(col = "red") +
  stat_valleys(col = "green")
The answer by @42- is great, but I had a use case where I didn't want to use zoo. It's easy to implement this with dplyr using lag and lead:
library(dplyr)
test <- data_frame(x = sample(1:10, 20, replace = TRUE))
mutate(test, local.minima = if_else(lag(x) > x & lead(x) > x, TRUE, FALSE))
Like the rollapply solution, you can control the window size and edge cases through the lag/lead arguments n and default, respectively.
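For instance, a sketch widening the window to two neighbours on each side, with default = Inf letting the endpoints of the series qualify as minima (the parameter choices are illustrative):
mutate(test,
       local.minima = lag(x, n = 1, default = Inf) > x & lead(x, n = 1, default = Inf) > x &
                      lag(x, n = 2, default = Inf) > x & lead(x, n = 2, default = Inf) > x)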
In the case I'm working on, duplicates are frequent. So I have implemented a function that allows finding either the first or the last of each extremum (min or max):
library(dplyr)
library(tibble)

locate_xtrem <- function (x, last = FALSE)
{
  # use rle to deal with duplicates
  x_rle <- rle(x)
  # force the first value to be identified as an extremum
  first_value <- x_rle$values[1] - x_rle$values[2]
  # differentiate the series, keep only the sign, and use 'rle' to locate
  # increases or decreases spanning multiple successive values.
  # The result values are a series of (only) -1 and 1.
  #
  # ! NOTE: with this method, the last value will be considered an extremum
  diff_sign_rle <- c(first_value, diff(x_rle$values)) %>% sign() %>% rle()
  # this vector will be used to get the initial positions
  diff_idx <- cumsum(diff_sign_rle$lengths)
  # find min and max
  diff_min <- diff_idx[diff_sign_rle$values < 0]
  diff_max <- diff_idx[diff_sign_rle$values > 0]
  # get the min and max indexes in the original series
  x_idx <- cumsum(x_rle$lengths)
  if (last) {
    min <- x_idx[diff_min]
    max <- x_idx[diff_max]
  } else {
    min <- x_idx[diff_min] - x_rle$lengths[diff_min] + 1
    max <- x_idx[diff_max] - x_rle$lengths[diff_max] + 1
  }
  # just get the number of occurrences
  min_nb <- x_rle$lengths[diff_min]
  max_nb <- x_rle$lengths[diff_max]
  # format the result as a tibble
  bind_rows(
    tibble(Idx = min, Values = x[min], NB = min_nb, Status = "min"),
    tibble(Idx = max, Values = x[max], NB = max_nb, Status = "max")) %>%
    arrange(.data$Idx) %>%
    mutate(Last = last) %>%
    mutate_at(vars(.data$Idx, .data$NB), as.integer)
}
The answer to the original question is:
> x <- c(1, 2, 3, 2, 1, 1, 2, 1)
> locate_xtrem(x)
# A tibble: 5 x 5
Idx Values NB Status Last
<int> <dbl> <int> <chr> <lgl>
1 1 1 1 min FALSE
2 3 3 1 max FALSE
3 5 1 2 min FALSE
4 7 2 1 max FALSE
5 8 1 1 min FALSE
The result indicates that the second minimum is equal to 1 and that this value is repeated twice starting at index 5. Therefore, a different result could be obtained by telling the function this time to find the last occurrences of the local extrema:
> locate_xtrem(x, last = TRUE)
# A tibble: 5 x 5
Idx Values NB Status Last
<int> <dbl> <int> <chr> <lgl>
1 1 1 1 min TRUE
2 3 3 1 max TRUE
3 6 1 2 min TRUE
4 7 2 1 max TRUE
5 8 1 1 min TRUE
Depending on the objective, it is then possible to switch between the first and the last value of a local extremum. The second result with last = TRUE could also be obtained from an operation between columns "Idx" and "NB"...
Finally, to deal with noise in the data, a function could be implemented to remove fluctuations below a given threshold. The code is not shown here since it goes beyond the initial question, but I have wrapped it in a package (mainly to automate the testing process), and below is an example of the result:
x_series %>% xtrem::locate_xtrem()
x_series %>% xtrem::locate_xtrem() %>% remove_noise()
Here's the solution for minima:
# @Ben's solution
x <- c(1,2,3,2,1,2,1)
which(diff(sign(diff(x)))==+2)+1 # 5
Please note the cases in Tommy's post!
# @Tommy's solution:
localMinima <- function(x) {
  # Use Inf instead if x is numeric (non-integer)
  y <- diff(c(.Machine$integer.max, x)) > 0L
  y <- cumsum(rle(y)$lengths)
  y <- y[seq.int(1L, length(y), 2L)]
  if (x[[1]] == x[[2]]) {
    y <- y[-1]
  }
  y
}
x <- c(1,2,9,9,2,1,1,5,5,1)
localMinima(x) # 1, 7, 10
x <- c(2,2,9,9,2,1,1,5,5,1)
localMinima(x) # 7, 10
x <- c(3,2,9,9,2,1,1,5,5,1)
localMinima(x) # 2, 7, 10
Note: neither localMaxima nor localMinima can handle duplicated maxima/minima at the start! The sketch below demonstrates this.
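A sketch demonstrating the caveat:
x <- c(9, 9, 2, 1, 1, 5, 5, 1)  # duplicated maximum at the start
localMaxima(x)  # 6 -- the plateau at indices 1:2 is silently dropped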
I had some trouble getting the locations to work in previous solutions and came up with a way to grab the minima and maxima directly. The code below will do this and will plot it, marking the minima in red and the maxima in green. Unlike the which.max() function, this will pull all indices of the minima/maxima out of a data frame. The zero value is added in the first diff() call to compensate for the result of diff() always being one element shorter than its input. Inserting it into the innermost diff() call saves having to add an offset outside of the logical expression. It doesn't matter much, but I feel it's a cleaner way to do it.
# create example data called stockData
stockData <- data.frame(x = 1:30, y = rnorm(30, 7))
# get the location of the minima/maxima. note the added zero offsets
# the location to get the correct indices
min_indexes <- which(diff(sign(diff(c(0, stockData$y)))) == 2)
max_indexes <- which(diff(sign(diff(c(0, stockData$y)))) == -2)
# get the actual values where the minima/maxima are located
min_locs <- stockData[min_indexes, ]
max_locs <- stockData[max_indexes, ]
# plot the data and mark minima in red and maxima in green
plot(stockData$y, type = "l")
points(min_locs, col = "red", pch = 19, cex = 1)
points(max_locs, col = "green", pch = 19, cex = 1)
This function by Timothée Poisot is handy for noisy series. From his blog post "An Algorithm To Find Local Extrema In A Vector" (May 3, 2009):
I spent some time looking for an algorithm to find local extrema in a vector (time series). The solution I used is to "walk" through the vector by steps larger than 1, in order to retain only one value even when the values are very noisy (see the picture at the end of the post).
It goes like this:
findpeaks <- function(vec, bw = 1, x.coo = c(1:length(vec)))
{
  pos.x.max <- NULL
  pos.y.max <- NULL
  pos.x.min <- NULL
  pos.y.min <- NULL
  for(i in 1:(length(vec)-1)) {
    if((i+1+bw) > length(vec)) {
      sup.stop <- length(vec)
    } else {
      sup.stop <- i+1+bw
    }
    if((i-bw) < 1) {
      inf.stop <- 1
    } else {
      inf.stop <- i-bw
    }
    subset.sup <- vec[(i+1):sup.stop]
    subset.inf <- vec[inf.stop:(i-1)]
    is.max   <- sum(subset.inf > vec[i]) == 0
    is.nomin <- sum(subset.sup > vec[i]) == 0
    no.max   <- sum(subset.inf > vec[i]) == length(subset.inf)
    no.nomin <- sum(subset.sup > vec[i]) == length(subset.sup)
    if(is.max & is.nomin){
      pos.x.max <- c(pos.x.max, x.coo[i])
      pos.y.max <- c(pos.y.max, vec[i])
    }
    if(no.max & no.nomin){
      pos.x.min <- c(pos.x.min, x.coo[i])
      pos.y.min <- c(pos.y.min, vec[i])
    }
  }
  return(list(pos.x.max, pos.y.max, pos.x.min, pos.y.min))
}
In the pracma package, use the findpeaks function:
library(pracma)
tt <- c(1, 2, 3, 2, 1, 1, 2, 1)
tt_peaks <- findpeaks(tt, zero = "0", peakpat = NULL,
                      minpeakheight = -Inf, minpeakdistance = 1,
                      threshold = 0, npeaks = 0, sortstr = FALSE)
tt_peaks
#      [,1] [,2] [,3] [,4]
# [1,]    3    3    1    5
# [2,]    2    7    6    8
That returns a matrix with 4 columns: the first column shows the local peaks' values, the second column gives their indices, and the 3rd and 4th columns give where the peaks start and end (with potential overlap). See https://www.rdocumentation.org/packages/pracma/versions/1.9.9/topics/findpeaks for details.
One caveat: I used it on a series of non-integers, and every peak came out one index too late, and I do not know why. So I had to manually subtract 1 from my index vector (no big deal).
Finding local maxima and minima for a not-so-easy sequence, e.g. 1 0 1 1 2 0 1 1 0 1 1 1 0 1: I would put the maxima at positions (1), 5, 7.5, 11 and (14), and the minima at 2, 6, 9, 13.
# Position:  1  2  3  4  5  6  7  8  9 10 11 12 13 14
x <- c(      1, 0, 1, 1, 2, 0, 1, 1, 0, 1, 1, 1, 0, 1)  # frequency
# Pattern:   p  v        p  v  p==p  v  p==p==p  v  p   # p..peak, v..valley
peakPosition <- function(x, inclBorders = TRUE) {
  if(inclBorders) {
    y <- c(min(x), x, min(x))
  } else {
    y <- c(x[1], x)
  }
  y <- data.frame(x = sign(diff(y)), i = 1:(length(y)-1))
  y <- y[y$x != 0, ]
  idx <- diff(y$x) < 0
  (y$i[c(idx, FALSE)] + y$i[c(FALSE, idx)] - 1)/2
}
# Find peaks
peakPosition(x)
# 1.0 5.0 7.5 11.0 14.0
# Find valleys
peakPosition(-x)
# 2 6 9 13
peakPosition(c(1, 2, 3, 2, 1, 1, 2, 1))  # 3 7
We see many nice functions and ideas with different features here. One issue with almost all of the examples is efficiency: we often see complex functions like diff() or for-loops, which become slow when large data sets are involved. Let me introduce an efficient function I use every day, with minimal features but very fast:
Local maxima function amax()
The purpose is to detect all local maxima in a real-valued vector. If the first element x[1] is the global maximum, it is ignored, because there is no information about the previous element. If there is a plateau, the first edge is detected.
# param x: numeric vector
# return: the indices of local maxima. If x[1] = max, then it is ignored.
amax <- function(x)
{
  a1 <- c(0, x, 0)
  a2 <- c(x, 0, 0)
  a3 <- c(0, 0, x)
  e <- which((a1 >= a2 & a1 > a3)[2:(length(x))])
  if(!is.na(e[1] == 1))
    if(e[1] == 1)
      e <- e[-1]
  if(length(e) == 0) e <- NaN
  return(e)
}
a <- c(1,2,3,2,1,5,5,4)
amax(a) # 3, 6
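If minima are needed as well, the same function can be reused by negating the input; a sketch (the caveats above about the first element and plateaus apply unchanged, mirrored):
amin <- function(x) amax(-x)  # a minimum of x is a maximum of -x
amin(a)  # 5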
I posted this elsewhere, but I think this is an interesting way to go about it. I'm not sure what its computational efficiency is, but it's a very concise way of solving the problem.
vals <- rbinom(1000, 20, 0.5)
# encode each successive difference as one character: "-" for a negative
# step, " " (format()'s leading space) for a non-negative one
text <- paste0(substr(format(diff(vals), scientific = TRUE), 1, 1), collapse = "")
# a peak sits right after a non-negative step followed by a negative one;
# the grepl() terms handle peaks at the very start and end of the series
sort(na.omit(c(gregexpr('[ ]-', text)[[1]] + 1,
               ifelse(grepl('^-', text), 1, NA),
               ifelse(grepl('[^-]$', text), length(vals), NA))))
An enhancement (fast and simple) of the formula proposed by @Ben, addressing the cases raised by @Tommy: the following recursive formula handles all cases:
dx <- c(0, sign(diff(x)))
# number of zeros in dx, minus the first one, which is added intentionally
numberofzeros <- length(dx) - sum(abs(dx)) - 1
# run the recursive formula to clear middle zeros:
# iterate for the number of zeros
for (i in 1:numberofzeros){
  dx <- sign(2*dx + c(0, rev(sign(diff(rev(dx))))))
}
Now, the formula provided by @Ben Bolker can be used with a little change:
plot(x)
points(which(diff(dx) == 2), x[which(diff(dx) == 2)], col = 'blue')   # local minima
points(which(diff(dx) == -2), x[which(diff(dx) == -2)], col = 'red')  # local maxima
I liked @mikeck's solution so that I wouldn't have to convert my data frames back and forth from a zoo object. But I also wanted to use a window wider than 1: their solution only looks at the xth value away from the value of interest, not the values within x distance. Here is what I came up with. You would need to add an extra lag/lead line for every offset from the value of interest that you want to look at; a generalization is sketched after the code.
library(dplyr)
x <- data.frame(AIC = c(98, 97, 96, 97, 98, 99, 98, 98, 97, 96, 95, 94, 93,
                        92, 93, 94, 95, 96, 95, 94, 93, 92, 91, 90, 89, 88))
x <- x %>%
  mutate(local.minima = if_else(lag(AIC) > AIC & lead(AIC) > AIC &
                                  lag(AIC, 2) > AIC & lead(AIC, 2) > AIC &
                                  lag(AIC, 3) > AIC & lead(AIC, 3) > AIC,
                                TRUE, FALSE),
         local.minima = if_else(is.na(local.minima), TRUE, local.minima))
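Here is that generalization sketched: a single Reduce() over the offsets replaces the hand-written lag/lead lines (w, the window half-width, is an illustrative name; default = Inf reproduces the NA-to-TRUE endpoint handling above):
w <- 3
is_min <- Reduce(`&`, lapply(1:w, function(k)
  lag(x$AIC, k, default = Inf) > x$AIC &
  lead(x$AIC, k, default = Inf) > x$AIC))
which(is_min)
# [1] 3 14 26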

subset data.frame union multiple intervals

Consider the following data.frame,
d <- data.frame(x = seq(0, 10, length=100), value = rnorm(100))
I wish to subset based on x belonging to any of the following intervals,
intervals <- list(c(0.2, 0.8), c(1, 2), c(8, 8.2))
test <- function(range, x){
  which(x >= range[1] & x <= range[2])
}
d[Reduce(`union`, lapply(intervals, test, x = d$x)), ]
Now, the testing function seems redundant, as it looks an awful lot like the built-in findInterval, but I cannot find an elegant way to use it.
condition <- Reduce(`|`, lapply(lapply(intervals, findInterval,
                                       x = d$x, all.inside = FALSE), `==`, 1))
d[condition, ]
Can you suggest better?
d[unlist(sapply(intervals, function(x) which(!is.na(cut(d$x, x))))), ]
x value
3 0.2020202 0.15488314
4 0.3030303 -0.06891842
5 0.4040404 1.59909655
6 0.5050505 0.31006866
7 0.6060606 1.68986821
8 0.7070707 0.18500635
11 1.0101010 0.18721091
12 1.1111111 0.32485063
13 1.2121212 -0.42728405
14 1.3131313 0.84220081
15 1.4141414 -1.30745237
16 1.5151515 -1.90335389
17 1.6161616 -0.47139683
18 1.7171717 0.01622827
19 1.8181818 0.76362918
20 1.9191919 -0.37827765
81 8.0808081 0.46672521
82 8.1818182 1.27038641
Edit: The same result using findInterval
d[findInterval(d$x,unlist(intervals))%%2==1,]
Here is a solution with the intervals package.
library(intervals)
d <- data.frame(x = seq(0, 10, length = 100), value = rnorm(100))
intervals <- list(c(0.2, 0.8), c(1, 2), c(8, 8.2))
intervals <- Intervals(do.call(rbind, intervals))
intervals <- reduce(intervals)  # simplify, if they overlap
condition <- distance_to_nearest(d$x, intervals) == 0
# The following would allow for non-closed intervals,
# but it is awfully slow.
condition <- sapply(d$x, function(u)
  any(!empty(interval_intersection(Intervals(c(u, u)), intervals))))
d[condition, ]
With findInterval, it may be trickier, because it assumes the intervals are closed on one side and open on the other. If this is acceptable, and if the intervals are ordered and do not overlap, you just need to check if the interval number is odd.
intervals <- list(c(0.2, 0.8), c(1, 2), c(8, 8.2))
condition <- findInterval( d$x, unlist(intervals) ) %% 2 == 1
d[condition,]
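Concretely, unlist(intervals) yields the sorted breakpoints c(0.2, 0.8, 1, 2, 8, 8.2), so a point inside a wanted interval lands in bin 1, 3 or 5 (odd) and a point outside in bin 0, 2, 4 or 6 (even); a small check:
findInterval(c(0.5, 0.9, 1.5, 8.1, 9), unlist(intervals))
# [1] 1 2 3 5 6 -- only 0.5, 1.5 and 8.1 fall in odd bins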

extract unique rows with a condition in r

I have this kind of data:
x <- matrix(c(2,2,3,3,3,4,4,20,33,2,3,45,6,9,45,454,7,4,6,7,5), nrow = 7, ncol = 3)
In the real dataset, I have a huge matrix with a lot of columns.
I want to extract unique rows with respect to the first column (Id) and the minimum of the third column. For instance, for this matrix I would expect
y <- matrix(c(2,3,4,20,3,9,45,4,5), nrow = 3, ncol = 3)
I tried a lot of things but I couldn't figure it out.
Any help is appreciated.
Thanks in advance,
Zeray
Here's a version that is more complicated, but somewhat faster than Chase's ddply solution - some 200x faster :-)
uniqueMin <- function(m, idCol = 1L, minCol = ncol(m)) {
  t(vapply(split(1:nrow(m), m[, idCol]),
           function(i, x, minCol) x[i, , drop = FALSE][which.min(x[i, minCol]), ],
           m[1, ], x = m, minCol = minCol))
}
And the following test code (plyr is needed for the ddply comparison):
library(plyr)
nRows <- 10000
nCols <- 100
ids <- nRows/5
m <- cbind(sample(ids, nRows, TRUE), matrix(runif(nRows*nCols), nRows))
system.time(a <- uniqueMin(m, minCol = 3L))                                    # 0.07
system.time(ddply(as.data.frame(m), "V1", function(x) x[which.min(x$V3), ]))  # 15.72
You can use the plyr package. Convert to a data.frame so you can group on the first column, then use which.min to extract the min row by group:
library(plyr)
ddply(as.data.frame(x), "V1", function(x) x[which.min(x$V3) ,])
#   V1 V2 V3
# 1  2 20 45
# 2  3  3  4
# 3  4  9  5
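For reference, the same result can also be had in base R with order() plus duplicated(): sort so that, within each id, the row with the smallest third column comes first, then keep each id's first row (a sketch):
xo <- x[order(x[, 1], x[, 3]), ]  # sort by id, then by the third column
xo[!duplicated(xo[, 1]), ]        # first row per id = row with minimal V3
#      [,1] [,2] [,3]
# [1,]    2   20   45
# [2,]    3    3    4
# [3,]    4    9    5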
