R - how to eliminate the upper x% of a vector

I need to throw out the outliers of my variable.
I want to remove the upper 10 percent of my variable's values.
Yet I have no clue how to find out which values are in my upper 10%.
If I make an arbitrary cut at 30, that cuts off the upper 3.45%:
dat$T102_01[dat$T102_01 < 30]
Is there any way to tell R not to take the values < 30, but rather the first 90% of the values?
Since I don't want to make a content-based decision (anything above 30 is unrealistic), it would be better to cut off the upper 10% of all the variables I have assessed.
I would be very thankful for any comments.
Sorry, I can't add a picture of my plot. The distribution is skewed and most values are between 0 and 30; very few values are between 30 and 100.

I would use the quantile function as follows:
x <- rnorm(50)
p90 <- quantile(x = x,probs = .9)
want <- x[x<p90]
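Applied to the variable from the question, a minimal sketch (assuming dat$T102_01 exists and may contain missing values) would be:
p90 <- quantile(dat$T102_01, probs = 0.9, na.rm = TRUE)
dat_trimmed <- subset(dat, T102_01 < p90)  # keep only the rows in the bottom 90%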

You can do this by sorting the vector and taking the value 90% of the way through it:
vec <- rnorm(1000)
cut <- sort( vec )[ round( length( vec ) * 0.9 ) ]
vec <- vec[ vec < cut ]
So we sort the vector, and take the value at the point 90% of the way through the vector as a cut point. We then use the cut point to take only the bottom 90% of the main vector.

Related

How can I create a random walk with elements of a dataframe in R?

Good evening,
I have a dataframe with 100 rows and the column headers are ID, x-coordinate and y-coordinate:
ID    X    Y
 1  0.1  0.1
 2  0.2  0.2
and so on. I would now like to simulate a random walk between these 100 points, but do not know how to refer to a row. I thought about something similar to the following:
dataframe[ID,]=dataframe[ID+1,]+sample(step,1)
However this throws the error "unused argument (alist(id = ))"
Does somebody know how to fix this?
Many thanks in advance!
This will return the rows in randomized order:
set.seed(123) # pick an arbitrary number to make reproducible
dfrm[ sample(100), ]
If you just wanted the x and y values but not the IDs, it would be:
set.seed(567)
dfrm[ sample(100), 2:3 ]
Here's a plot of a result:
df_start <- data.frame(ID=1:100, X=runif(100), Y=runif(100))
dfrm <- df_start[ sample(100) , ]
plot(x = 0:1, y = 0:1, type = "n") # just setting the plotting range
arrows(x0=head(dfrm$X, -1), x1=tail(dfrm$X,-1),
y0=head(dfrm$Y, -1), y1=tail(dfrm$Y,-1) )
You said you wanted a "random walk between these points". Another way to create a random walk, which would be more Markovian, would be to use the values as increments from a starting point and have the values centered at 0, perhaps spanning [-1, 1]. You would instead take the cumsum of the values after starting at (0, 0).
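A minimal sketch of that cumulative-sum idea (centering the sample coordinates from [0, 1] to roughly [-0.5, 0.5] is an assumption made here for illustration):
set.seed(99)
df_start <- data.frame(ID = 1:100, X = runif(100), Y = runif(100))
steps <- df_start[sample(100), ]         # visit the points in a random order
walk_x <- cumsum(c(0, steps$X - 0.5))    # start at (0, 0), then accumulate centered increments
walk_y <- cumsum(c(0, steps$Y - 0.5))
plot(walk_x, walk_y, type = "l")         # the resulting random walk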

Find local minimum in a vector with R

Taking the ideas from the following links:
the local minimum between the two peaks
How to explain ...
I am looking for the local minimum or minima, avoiding the use of functions already created for this purpose (local or global max/min finders).
Our progress:
#DATA
simulate <- function(lambda=0.3, mu=c(0, 4), sd=c(1, 1), n.obs=10^5) {
x1 <- rnorm(n.obs, mu[1], sd[1])
x2 <- rnorm(n.obs, mu[2], sd[2])
return(ifelse(runif(n.obs) < lambda, x1, x2))
}
data <- simulate()
hist(data)
d <- density(data)
#
#https://stackoverflow.com/a/25276661/8409550
##Since the x-values are equally spaced, we can estimate dy using diff(d$y)
d$x[which.min(abs(diff(d$y)))]
#With our data we did not obtain the expected value
#
d$x[which(diff(sign(diff(d$y)))>0)+1]#pit
d$x[which(diff(sign(diff(d$y)))<0)+1]#peak
#we check
#1
optimize(approxfun(d$x,d$y),interval=c(0,4))$minimum
optimize(approxfun(d$x,d$y),interval=c(0,4),maximum = TRUE)$maximum
#2
tp <- pastecs::turnpoints(d$y)
summary(tp)
ind <- (1:length(d$y))[extract(tp, no.tp = FALSE, peak = TRUE, pit = TRUE)]
d$x[ind[2]]
d$x[ind[1]]
d$x[ind[3]]
My questions and request for help:
Why did this line fail:
d$x[which.min(abs(diff(d$y)))]
Is it possible to eliminate the need to add one to the index in these lines?
d$x[which(diff(sign(diff(d$y)))>0)+1]#pit
d$x[which(diff(sign(diff(d$y)))<0)+1]#peak
How can I get the optimize function to return the two expected maximum values?
Question 1
The answer to the first question is straightforward. The line d$x[which.min(abs(diff(d$y)))] asks for the x value at which there was the smallest change in y between two consecutive points. The answer is that this happened at the extreme right of the plot, where the density curve is essentially flat:
which.min(abs(diff(d$y)))
#> [1] 511
length(abs(diff(d$y)))
#> [1] 511
This is not only smaller than the difference at your local maxima/minima points; it is orders of magnitude smaller. Let's zoom in on the peak value of d$y, including only the peak and the point on each side:
which.max(d$y)
#> [1] 324
plot(d$x[323:325], d$y[323:325])
We can see that the smallest difference is around 0.00005, or 5e-05, between two consecutive points. Now look at the end of the plot, where it is flattest:
plot(d$x[510:512], d$y[510:512])
The difference there is about 1e-07, which is why this is the flattest point.
Question 2
The answer to your second question is "no, not really". You are taking a double diff of d$y, which is two elements shorter than d$y itself; if d$y is n elements long, the double diff corresponds to elements 2 to (n - 1) of d$y. You can remove the +1 from the index, but then you will have an off-by-one error. If you really wanted to, you could concatenate dummy zeros at each stage of the diff, like this:
d$x[which(c(0, diff(sign(diff(c(d$y, 0))))) > 0)]
which gives the same result, but this is longer, harder to read and harder to justify, so why would you?
Question 3
The answer to the third question is that you could use the "pit" as the dividing point between the minimum and maximum value of d$x to find the two "peaks". If you really want a single call to get both at once, you could do it inside an sapply:
pit <- optimize(approxfun(d$x,d$y),interval=c(0,4))$minimum
peaks <- sapply(1:2, function(i) {
  optimize(approxfun(d$x, d$y),
           interval = c(min(d$x), pit, max(d$x))[i:(i + 1)],
           maximum = TRUE)$maximum
})
pit
#> [1] 1.691798
peaks
#> [1] -0.02249845 3.99552521

How to use the sum function in a for loop in R?

We want to calculate the value of an integral under a linear plot.
For a better understanding, look at the photo. Let's say the overall area is 1. We want to find what the value in a certain part is. For instance, we want to know what percentage of the overall 100% lies between the 10th and 11th month, if everything refers to months and the maximum A stands for 24.
We can calculate an integral and should then be able to get the area we are looking for from F(x) - F(x-1).
I thought about the following code:
a <- 24
tab <-matrix(0,a,1)
tab <-cbind(seq(1,a),tab)
tab<-data.frame(tab)
#initialization for first point
tab[1,2] <- (2*tab[1,1] / a - tab[1,1]^2 / a^2)
# for loop: integral up to each point, minus the area already accounted for
for (i in 2:nrow(tab)) {
  tab[i,2] <- (2*tab[i,1] / a - tab[i,1]^2 / a^2) - sum(tab[1,2]:tab[i-1,2])
}
#plotting
plot(tab[,2], type="l")
If you look at the plot, it's confusing. Any ideas how to handle this correctly?
The base R function integrate() can do this for you:
f <- function(x, A) 2/A - x / A^2
integrate(function(x)f(x, 24), lower=10, upper=11)
0.06510417 with absolute error < 7.2e-16
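If you want the share for every one-month interval at once, the same integrate() call can be wrapped in sapply(); a small sketch reusing f and A = 24 from above:
shares <- sapply(1:24, function(m)
  integrate(function(x) f(x, 24), lower = m - 1, upper = m)$value)
shares[11]  # the 10th-to-11th month share computed above
# [1] 0.06510417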
Using the formulas directly:
a <- 24 # number of divisions
x <- seq(1, a)
y <- x*2/a - x^2/a^2 # F(x)
z <- (x*2/a - x^2/a^2) - ((x-1)*2/a - (x-1)^2/a^2) # F(x) - F(x-1)
Then do the binding afterward.
> sum(z)
[1] 1

R calculate possible values of two variables

I'm trying to calculate all the possible values of a grid size (x by y) that lead to the same number of cells; for example, a 2x2 grid has 4 cells. I want y to be half of x, and the total to be, for example, 4000. So I guess I want R to calculate all the possible positive integer values of x and y where
function(total) {
  x * y = total
  x / y = 2
  x != total
  y != total
}
I suppose one way to get positive integers and to consider different solutions would be to allow the total to be up to 10% larger than its original value (but not smaller; I need the grid to be at least as big as the total value I give). In that case the function could have two arguments, tot (e.g. 4000) and tolerance (e.g. 10%). Total (as used in the sketch function above) then has to be between tot and (tot + tolerance*tot).
I have several cell sizes so 4000 is only one example. I'm trying to build a quick function which returns positive integers only and returns a matrix of Xs and Ys.
Any ideas?
Many thanks
What about this:
possible.sizes <- function(total, tolerance) {
  min.total <- total
  max.total <- total * (1 + tolerance)
  min.y <- ceiling(sqrt(min.total/2))
  max.y <- floor(sqrt(max.total/2))
  if (max.y < min.y)
    return(data.frame(x=numeric(0), y=numeric(0)))
  y <- seq(min.y, max.y)
  x <- 2*y
  return(data.frame(x=x, y=y))
}
possible.sizes(4000, 0.1)
# x y
# 1 90 45
# 2 92 46
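A quick sanity check (a sketch) that the returned grids are at least as large as the target and within the 10% tolerance:
sizes <- possible.sizes(4000, 0.1)
sizes$x * sizes$y
# [1] 4050 4232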

Detecting dips in a 2D plot

I need to automatically detect dips in a 2D plot, like the regions marked with red circles in the figure below. I'm only interested in the "main" dips, meaning the dips have to span a minimum length in the x axis. The number of dips is unknown, i.e., different plots will contain different numbers of dips. Any ideas?
Update:
As requested, here's the sample data, together with an attempt to smooth it using median filtering, as suggested by vines.
It looks like I now need a robust way to approximate the derivative at each point, one that ignores the little blips that remain in the data. Is there any standard approach?
y <- c(0.9943,0.9917,0.9879,0.9831,0.9553,0.9316,0.9208,0.9119,0.8857,0.7951,0.7605,0.8074,0.7342,0.6374,0.6035,0.5331,0.4781,0.4825,0.4825,0.4879,0.5374,0.4600,0.3668,0.3456,0.4282,0.3578,0.3630,0.3399,0.3578,0.4116,0.3762,0.3668,0.4420,0.4749,0.4556,0.4458,0.5084,0.5043,0.5043,0.5331,0.4781,0.5623,0.6604,0.5900,0.5084,0.5802,0.5802,0.6174,0.6124,0.6374,0.6827,0.6906,0.7034,0.7418,0.7817,0.8311,0.8001,0.7912,0.7912,0.7540,0.7951,0.7817,0.7644,0.7912,0.8311,0.8311,0.7912,0.7688,0.7418,0.7232,0.7147,0.6906,0.6715,0.6681,0.6374,0.6516,0.6650,0.6604,0.6124,0.6334,0.6374,0.5514,0.5514,0.5412,0.5514,0.5374,0.5473,0.4825,0.5084,0.5126,0.5229,0.5126,0.5043,0.4379,0.4781,0.4600,0.4781,0.3806,0.4078,0.3096,0.3263,0.3399,0.3184,0.2820,0.2167,0.2122,0.2080,0.2558,0.2255,0.1921,0.1766,0.1732,0.1205,0.1732,0.0723,0.0701,0.0405,0.0643,0.0771,0.1018,0.0587,0.0884,0.0884,0.1240,0.1088,0.0554,0.0607,0.0441,0.0387,0.0490,0.0478,0.0231,0.0414,0.0297,0.0701,0.0502,0.0567,0.0405,0.0363,0.0464,0.0701,0.0832,0.0991,0.1322,0.1998,0.3146,0.3146,0.3184,0.3578,0.3311,0.3184,0.4203,0.3578,0.3578,0.3578,0.4282,0.5084,0.5802,0.5667,0.5473,0.5514,0.5331,0.4749,0.4037,0.4116,0.4203,0.3184,0.4037,0.4037,0.4282,0.4513,0.4749,0.4116,0.4825,0.4918,0.4879,0.4918,0.4825,0.4245,0.4333,0.4651,0.4879,0.5412,0.5802,0.5126,0.4458,0.5374,0.4600,0.4600,0.4600,0.4600,0.3992,0.4879,0.4282,0.4333,0.3668,0.3005,0.3096,0.3847,0.3939,0.3630,0.3359,0.2292,0.2292,0.2748,0.3399,0.2963,0.2963,0.2385,0.2531,0.1805,0.2531,0.2786,0.3456,0.3399,0.3491,0.4037,0.3885,0.3806,0.2748,0.2700,0.2657,0.2963,0.2865,0.2167,0.2080,0.1844,0.2041,0.1602,0.1416,0.2041,0.1958,0.1018,0.0744,0.0677,0.0909,0.0789,0.0723,0.0660,0.1322,0.1532,0.1060,0.1018,0.1060,0.1150,0.0789,0.1266,0.0965,0.1732,0.1766,0.1766,0.1805,0.2820,0.3096,0.2602,0.2080,0.2333,0.2385,0.2385,0.2432,0.1602,0.2122,0.2385,0.2333,0.2558,0.2432,0.2292,0.2209,0.2483,0.2531,0.2432,0.2432,0.2432,0.2432,0.3053,0.3630,0.3578,0.3630,0.3668,0.3263,0.3992,0.4037,0.4556,0.4703,0.5173,0.6219,0.6412,0.7275,0.6984,0.6756,0.7079,0.7192,0.7342,0.7458,0.7501,0.7540,0.7605,0.7605,0.7342,0.7912,0.7951,0.8036,0.8074,0.8074,0.8118,0.7951,0.8118,0.8242,0.8488,0.8650,0.8488,0.8311,0.8424,0.7912,0.7951,0.8001,0.8001,0.7458,0.7192,0.6984,0.6412,0.6516,0.5900,0.5802,0.5802,0.5762,0.5623,0.5374,0.4556,0.4556,0.4333,0.3762,0.3456,0.4037,0.3311,0.3263,0.3311,0.3717,0.3762,0.3717,0.3668,0.3491,0.4203,0.4037,0.4149,0.4037,0.3992,0.4078,0.4651,0.4967,0.5229,0.5802,0.5802,0.5846,0.6293,0.6412,0.6374,0.6604,0.7317,0.7034,0.7573,0.7573,0.7573,0.7772,0.7605,0.8036,0.7951,0.7817,0.7869,0.7724,0.7869,0.7869,0.7951,0.7644,0.7912,0.7275,0.7342,0.7275,0.6984,0.7342,0.7605,0.7418,0.7418,0.7275,0.7573,0.7724,0.8118,0.8521,0.8823,0.8984,0.9119,0.9316,0.9512)
yy <- runmed(y, 41)
plot(y, type="l", ylim=c(0,1), ylab="", xlab="", lwd=0.5)
points(yy, col="blue", type="l", lwd=2)
EDIT: the function now strips the regions down to nothing but the lowest part, if wanted.
Actually, using the mean is easier than using the median. It allows you to find regions where the real values are continuously below the mean. The median is not smooth enough for an easy application.
One example function to do this would be:
FindLowRegion <- function(x, n = length(x)/4, tol = length(x)/20, p = 0.5){
  nx <- length(x)
  n <- 2*(n %/% 2) + 1
  # smooth out based on means
  sx <- rowMeans(embed(c(rep(NA, n/2), x, rep(NA, n/2)), n), na.rm = TRUE)
  # find which stretches are below the running mean
  rlesx <- rle((sx - x) > 0)
  # construct start and end of regions
  int <- embed(cumsum(c(1, rlesx$lengths)), 2)
  # which regions fulfill the requirements
  id <- rlesx$value & rlesx$length > tol
  # cut regions down to the part below the chosen quantile
  regions <- apply(int[id, ], 1, function(i){
    i <- min(i):max(i)
    tmp <- x[i]
    id <- which(tmp < quantile(tmp, p))
    id <- min(id):max(id)
    i[id]
  })
  # return the indices of the low regions
  unlist(regions)
}
where
n determines how many values are used to calculate the running mean,
tol determines how many consecutive values should be lower than the running mean to count as a low region, and
p determines the cutoff (as a quantile) used for stripping the regions down to their lowest part. When p = 1, the complete low region is shown.
The function is tweaked to work on the data as you presented it, but the numbers might need to be adjusted a bit to work with other data.
This function returns a set of indices, which allows you to find the low regions. Illustrated with your y vector:
Lows <- FindLowRegion(y)
newx <- seq_along(y)
newy <- ifelse(newx %in% Lows,y,NA)
plot(y, col="blue", type="l", lwd=2)
lines(newx, newy, col = "red", lwd = 3)
This gives a plot of y in blue with the detected low regions drawn over it in red.
You have to smooth the graph in some way. Median filtering is quite useful for that purpose (see http://en.wikipedia.org/wiki/Median_filter). After smoothing, you simply have to search for the minima, just as usual (i.e. search for the points where the first derivative switches from negative to positive).
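As a minimal sketch of that idea, reusing the y vector and its runmed()-smoothed version yy from the question:
dips <- which(diff(sign(diff(yy))) > 0) + 1   # where the smoothed curve turns from falling to rising (or flat)
plot(y, type = "l")
points(dips, yy[dips], col = "red", pch = 19) # mark the candidate dips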
A simpler answer (which also does not require smoothing) could be obtained by adapting the maxdrawdown() function from the tseries package. A drawdown is commonly defined as the retreat from the most recent maximum; here we want the opposite. Such a function could then be used in a sliding window over the data, or over segmented data.
maxdrawdown <- function(x) {
  if (NCOL(x) > 1)
    stop("x is not a vector or univariate time series")
  if (any(is.na(x)))
    stop("NAs in x")
  cmaxx <- cummax(x) - x
  mdd <- max(cmaxx)
  to <- which(mdd == cmaxx)
  from <- double(NROW(to))
  for (i in 1:NROW(to))
    from[i] <- max(which(cmaxx[1:to[i]] == 0))
  return(list(maxdrawdown = mdd, from = from, to = to))
}
So instead of using cummax(), one would have to switch to cummin() etc.
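A hedged sketch of that adaptation (the name maxrunup is made up here): swapping cummax() for cummin() makes the function measure the largest rise above the running minimum, i.e. the recovery out of a dip.
maxrunup <- function(x) {
  if (NCOL(x) > 1)
    stop("x is not a vector or univariate time series")
  if (any(is.na(x)))
    stop("NAs in x")
  cminx <- x - cummin(x)                        # height above the running minimum
  mru <- max(cminx)                             # largest recovery
  to <- which(mru == cminx)                     # where the recovery peaks
  from <- double(NROW(to))
  for (i in 1:NROW(to))
    from[i] <- max(which(cminx[1:to[i]] == 0))  # last time the series sat at its running minimum
  return(list(maxrunup = mru, from = from, to = to))
}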
My first thought was something much cruder than filtering. Why not look for the big drops followed by long enough stable periods?
span.b <- 20        # how far back to look for the drop
threshold.b <- 0.2  # how big the drop must be
dy.b <- c(rep(NA, span.b), diff(y, lag = span.b))  # backward-looking change over span.b points
span.f <- 10        # how far ahead to look for stability
threshold.f <- 0.05 # how flat the following stretch must be
dy.f <- c(diff(y, lag = span.f), rep(NA, span.f))  # forward-looking change over span.f points
down <- which(dy.b < -1 * threshold.b & abs(dy.f) < threshold.f)  # big drop behind, flat ahead
abline(v = down)    # mark the candidate dips on an existing plot of y
The plot shows that it's not perfect, but it doesn't discard the outliers (I guess it depends on your take on the data).
