A machine learning model predicts a probability p from an input x. It is unknown how the model calculates this probability.
In the example below, we have 100 x and p values.
Can someone please show an algorithm to find all values of x for which p is 0.5?
There are two challenges:
1. I don't know the function p = f(x), and I don't wish to fit a smooth polynomial curve, which would remove the noise. The noise is important.
2. The x values are discrete, so we need to interpolate to find the desired values of x.
library(tidyverse)
x <- seq(0.1, 10, by = 0.1)  # 100 equally spaced values
p <- c(0.69385203,0.67153592,0.64868391,0.72205029,0.64917218,0.66818861,0.55532616,0.58631660,0.65013198,0.53695673,0.57401464,0.57812980,0.39889101,0.41922821,0.44022287,0.48610191,0.34235438,0.30877592,0.20408235,0.17221558,0.23667792,0.29237938,0.10278049,0.20981142,0.08563396,0.12080935,0.03266140,0.12362265,0.11210208,0.08364931,0.04746024,0.14754152,0.09865584,0.16588175,0.16581508,0.14036209,0.20431540,0.19971309,0.23336415,0.12444293,0.14120138,0.21566896,0.18490258,0.34261082,0.38338941,0.41828079,0.34217964,0.38137610,0.41641546,0.58767796,0.45473784,0.60015956,0.63484702,0.55080768,0.60981219,0.71217369,0.60736818,0.78073246,0.68643671,0.79230105,0.76443958,0.74410139,0.63418201,0.64126278,0.63164615,0.68326471,0.68154362,0.75890922,0.72917978,0.55839943,0.55452549,0.69419777,0.64160572,0.63205751,0.60118916,0.40162340,0.38523375,0.39309260,0.47021037,0.33391614,0.22400555,0.20929558,0.20003229,0.15848124,0.11589228,0.13326047,0.11848593,0.17024106,0.11184393,0.12506915,0.07740497,0.02548386,0.07381765,0.02610759,0.13271803,0.07034573,0.02549706,0.02503864,0.11621910,0.08636754)
tbl <- tibble(x, p)
# plot for visualization
ggplot(data = tbl, aes(x = x, y = p)) +
  geom_line() +
  geom_point() +
  geom_hline(yintercept = 0.5) +
  theme_bw() +
  theme(aspect.ratio = 0.4)
The figure below shows that there are five roots.
This question is clearer than your previous one: How do I use numerical methods to calculate roots in R?
"I don't know the function p = f(x)."
So you don't have a predict function to calculate p for new x values. This is odd, though; many statistical models have methods for predict. As BenBolker mentioned, the "obvious" solution is to use uniroot or more automated routines to find one or all roots of the following template function:
function (x, model, p.target) predict(model, x) - p.target
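For illustration, here is how that template would be used if a fitted object with a predict method existed. This is a sketch only: model, the data-frame column name x, and the search interval c(0, 10) are all assumptions, and uniroot finds just one root per sign-changing interval.

# sketch only: root finding against a hypothetical fitted model `model`
f <- function(x, model, p.target) predict(model, data.frame(x = x)) - p.target
uniroot(f, interval = c(0, 10), model = model, p.target = 0.5)$root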
But this does not work for you. You only have a set of (x, p) values that look noisy.
"I don't wish to fit a smooth polynomial curve, which would remove the noise. The noise is important."
So we need to interpolate those (x, p) values to get a function p = f(x).
"The x values are discrete, so we need to interpolate to find the desired values of x."
Exactly. The question is what interpolation method to use.
"The figure below shows that there are five roots."
This line chart is actually a linear interpolation consisting of piecewise line segments. To find where it crosses a horizontal line, you can use the function RootSpline1 defined in my Q & A from 2018: get x-value given y-value: general root finding for linear / non-linear interpolation function.
RootSpline1(x, p, 0.5)
#[1] 1.243590 4.948805 5.065953 5.131125 7.550705
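If you only want the idea behind RootSpline1, here is a minimal sketch of the same piecewise-linear root finding. This is my condensed illustration, not the exact function from the linked answer, and a data point lying exactly on the target level may be reported twice:

# find all x where the piecewise-linear interpolant of (x, y) crosses y0
linear_roots <- function(x, y, y0) {
  d <- y - y0
  # segments whose endpoints straddle (or touch) the target level
  i <- which(d[-length(d)] * d[-1] <= 0 & d[-length(d)] != d[-1])
  # linear interpolation inside each bracketing segment
  x[i] + d[i] * (x[i + 1] - x[i]) / (y[i] - y[i + 1])
}
linear_roots(x, p, 0.5)  # should agree with RootSpline1(x, p, 0.5)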
Thank you very much. Please add information on how to install the required package; that will help everyone.
This function is not in a package, but that is a good suggestion. I am now thinking of collecting all the functions I have written on Stack Overflow into a package.
The linked Q & A does mention an R package on GitHub: https://github.com/ZheyuanLi/SplinesUtils, but it focuses on splines of higher degree, like cubic interpolation splines, cubic smoothing splines, and regression B-splines. Linear interpolation is not dealt with there, so for the moment you need to grab the function RootSpline1 from my Stack Overflow answer.
I am trying to fit a curve to a set of data points but have not succeeded, so I am asking here.
plot(time, val)  # look at the data
exponential.model <- lm(log(val) ~ time)  # fit a log-linear model
fit <- exp(predict(exponential.model, list(time = time)))  # back-transform to the fitted curve
plot(time, val)  # plot it again
lines(time, fit, lwd = 2)  # show the fitted line
My only problem is that my data contains negative values, so log(val) produces a lot of NAs, which makes the model computation fail.
I know that my data does not necessarily look exponential, but I want to see the fit anyway. I also used another program, which shows that val = 27.1331 * exp(-time / 2.88031) is a nice fit, but I do not know what I am doing wrong.
I want to compute it with R.
I had the idea of shifting the data so that no negative values remain, but the result is poor and almost certainly wrong.
plot(time, val + 20)  # look at the shifted data
exponential.model <- lm(log(val + 20) ~ time)  # fit on shifted values
fit <- exp(predict(exponential.model, list(time = time)))  # create the fitted curve
plot(time, val)  # plot the original data again
lines(time, fit - 20, lwd = 2)  # show the (bad) fitted line
Thank you!
I figured some things out and have a satisfying solution.
exponential.model <- lm(log(val) ~ time)  # compute model
The log(val) term is trying to rescale the values so that a linear model can be applied. Since this is not possible with my values, you have to use a non-linear model (nls).
exponential.model <- nls(val ~ a * exp(b * time), start = list(a = 30, b = -0.1))
This worked fine for me.
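To see the result, overlay the fitted curve on the data. A minimal sketch, assuming time and val are the (sorted) data from the question:

plot(time, val)  # raw data
lines(time, predict(exponential.model), lwd = 2)  # fitted values from nls
coef(exponential.model)  # compare with val = 27.1331 * exp(-time / 2.88031)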
[Figure: the satisfying fit]
Using Greg's helpful answer here, I fit a second-order polynomial regression line to my dataset:
poly.fit <- lm(y ~ poly(x, 2), data = df)
When I plot the line, I get the graph below:
The coefficients are:
# Coefficients:
# (Intercept) poly(x, 2)1 poly(x, 2)2
# 727.1 362.4 -269.0
I then wanted to find the x-value of the peak. I assumed there was an easy way to do so in R but did not know it,* so I went to Wolfram Alpha. I entered the equation:
y = 727.1 + 362.4x - 269x^2
Wolfram Alpha returned the following:
As you can see, the function intersects the x-axis at approximately x=2.4. This is obviously different from my plot in R, which ranges from 0≤x≤80. Why are these different? Does R interpret my x-values as a fraction of some backroom variable?
*I would also appreciate answers on how to find this peak. Obviously I could take the derivative, but how do I set it to zero?
Use predict. Note also that poly(x, 2) fits orthogonal polynomials by default, so the reported coefficients are not the coefficients of 1, x, and x^2; plugging them into y = a + bx + cx^2, as Wolfram Alpha does, produces a different curve. Refit with poly(x, 2, raw = TRUE) if you want coefficients on the raw powers of x.
plot(40:90, predict(poly.fit, list(x = 40:90)))
In the case of a quadratic polynomial, you can of course use a little calculus and algebra (once you have friendly coefficients).
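For instance, refitting with raw (rather than orthogonal) coefficients makes the algebra direct. A sketch, assuming df, x, and y are the data from the question:

raw.fit <- lm(y ~ poly(x, 2, raw = TRUE), data = df)  # raw polynomial basis
b <- coef(raw.fit)
# y = b[1] + b[2]*x + b[3]*x^2 has derivative b[2] + 2*b[3]*x,
# which is zero at the vertex; it is a maximum when b[3] < 0
-b[2] / (2 * b[3])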
Somewhat more generally, you can get an estimate by evaluating your model over a range of candidate values and determining which one gives you the maximum response value.
Here is a function (only moderately robust) that will work here.
xmax <- function(fit, startx, endx, x = 'x', within = NA) {
  ## find the approximate value of variable x where the model
  ## specified by fit takes its maximum value, inside the interval
  ## [startx, endx]; precision specified by within
  within <- ifelse(is.na(within), (endx - startx) / 100, within)
  testx <- seq(startx, endx, by = within)
  testlist <- list(testx)
  names(testlist)[1] <- x
  testy <- predict(fit, testlist)
  testx[which.max(testy)]
}
Note that if your predictor variable is called something other than x, you have to specify its name as a string in the x parameter.
So to find the x value where your curve has its peak:
xmax(poly.fit, 50, 80, within=0.1)
I want to turn a continuous random variable X with CDF F_X(x) into a continuous random variable Y with CDF F_Y(y), and I am wondering how to implement this in R.
For example, perform a probability transformation on data following a normal distribution (X) to make it conform to a desired Weibull distribution (Y).
(For example: x = 0 has F_X(0) = 0.5; if F_Y(y) = 0.5 corresponds to y = 5, then x = 0 maps to y = 5, and so on.)
There are many built-in distribution functions: those starting with 'p' transform to a uniform, and those starting with 'q' transform from a uniform. So the transform in your example can be done by:
y <- qweibull(pnorm(x), 2, 6.0056)
Then just change the functions and/or parameters for other cases.
The distr package may also be of interest for additional capabilities.
In general, you can transform an observation x on X to an observation y on Y by:

1. getting the probability of X ≤ x, i.e. F_X(x);
2. determining which observation y has the same probability, i.e. you want P(Y ≤ y) = F_Y(y) to be the same as F_X(x).

This gives F_Y(y) = F_X(x), and therefore y = F_Y^{-1}(F_X(x)), where F_Y^{-1} is better known as the quantile function Q_Y. The overall transformation from X to Y is summarized as Y = Q_Y(F_X(X)).
In your particular example, from the R help, the distribution function for the normal distribution is pnorm and the quantile function for the Weibull distribution is qweibull, so you want to first call pnorm and then qweibull on the result.
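A quick simulation check of this recipe, reusing the shape 2 and scale 6.0056 from the answer above:

set.seed(42)
x <- rnorm(1e4)  # draws from X ~ N(0, 1)
y <- qweibull(pnorm(x), shape = 2, scale = 6.0056)  # Y = Q_Y(F_X(X))
hist(y, freq = FALSE, breaks = 40)
curve(dweibull(x, shape = 2, scale = 6.0056), add = TRUE, col = 2)  # target density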
I'm trying to fit a natural cubic spline to probabilistic data (probabilities that a random variable is smaller than certain values) to obtain a cumulative distribution function, which works well enough using splinefun():
cutoffs <- c(-90, -60, -30, 0, 30, 60, 90, 120)
probs <- c(0, 0, 0.05, 0.25, 0.5, 0.75, 0.9, 1)
CDF.spline <- splinefun(cutoffs, probs, method = "natural")
plot(cutoffs, probs)
curve(CDF.spline(x), add = TRUE, col = 2, n = 1001)
I would then, however, like to use the density function, i.e. the derivative of the spline, to perform various calculations (e.g. to obtain the expected value of the random variable).
Is there any way of obtaining this derivative as a function rather than just evaluated at a discrete number of points via splinefun(x, deriv=1)?
This is pretty close to what I'm looking for, but alas the example doesn't seem to work in R version 2.15.0.
Barring an analytical solution, what's the cleanest numerical way of going about this?
If you change the environment assignment line for g in the code that Berwin Turlach provided on R-help to this:
environment(g) <- environment(f)
... you succeed in R 2.15.1.
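Alternatively, since the function returned by splinefun accepts a deriv argument, you can simply wrap it to get the density as an ordinary R function. A numerical sketch, reusing CDF.spline from the question and assuming the spline density is well behaved over [-90, 120]:

pdf.spline <- function(x) CDF.spline(x, deriv = 1)  # density = d/dx of the CDF
curve(pdf.spline(x), from = -90, to = 120, n = 1001)
# e.g. the expected value as the integral of x * f(x) over the support
integrate(function(x) x * pdf.spline(x), lower = -90, upper = 120)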