Sampling custom probability density function in R
Using the dwp package, I obtained the probability density function that describes my original data. The fitted function is stored as an element of a list, so I can see the fitted parameters by checking that particular list element:
> Kbatmod$xep02
Distribution: xep02
Formula: ncarc ~ log(r) + I(r^2) + offset(log(exposure))
Parameters:
           b0            b2
-0.8396640654 -0.0004653923

Coefficients:
  (Intercept)        log(r)        I(r^2)
-2.1154557406 -0.8396640654 -0.0004653923

Variance:
              (Intercept)        log(r)        I(r^2)
(Intercept)  3.034712e-01 -1.111308e-01  4.550014e-05
log(r)      -1.111308e-01  4.608998e-02 -2.451855e-05
I(r^2)       4.550014e-05 -2.451855e-05  2.531856e-08
Now I want to sample from that function to plot a point cloud (locations around 0,0). The direction (bearing) will be sampled as random numbers between 0 and 360, but the distances need to follow this fitted xep02 function. I can't find a function in the dwp package that would do this, even though its manual uses this kind of density point cloud to explain how the package works.
I tried the RVCompare package and its function sampleFromDensity, but I keep getting errors, and I believe it is because of how I'm passing it the xep02 function.
> dwpPDF <- Kbatmod$xep02
>
> PDFsamples <- sampleFromDensity(dwpPDF, 100, c(0,100))
Error in density.default(X[[i]], ...) :
need at least 2 points to select a bandwidth automatically
Can someone help me "translate" what the dwp package gave me into input for RVCompare?
The aim is to obtain a vector of 100 distances that follow the xep02 PDF, pair it with a vector of 100 randomly selected bearings, and then plot these points in ArcGIS to overlay them with predefined polygons and see how many points fall within those polygons.
sampleFromDensity expects a function as its first argument, not a fitted model object. You can accomplish this by wrapping the fit with the ddd function, so the fitted density is evaluated at each distance:
sampleFromDensity(function(x) ddd(x, dwpPDF), 100, c(0,100))
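To get from sampled distances to a point cloud you can export to ArcGIS, something along the following lines should work. This is only a sketch: it assumes the fitted object Kbatmod$xep02 from above, that ddd() and sampleFromDensity() behave as used in this answer, and that 0 to 100 is an appropriate distance range. Bearings are treated as compass bearings (0 = north) and the points are generated relative to the origin (0,0), so you would still shift them to your site's real-world coordinates.
library(dwp)
library(RVCompare)

dwpPDF <- Kbatmod$xep02

# 100 distances drawn from the fitted xep02 density, restricted to 0-100
distances <- sampleFromDensity(function(x) ddd(x, dwpPDF), 100, c(0, 100))

# 100 random bearings in degrees, converted to radians for the trigonometry
bearings_deg <- runif(100, 0, 360)
bearings_rad <- bearings_deg * pi / 180

# polar -> Cartesian, with 0 degrees pointing north (positive y-axis)
pts <- data.frame(
  distance = distances,
  bearing  = bearings_deg,
  x = distances * sin(bearings_rad),
  y = distances * cos(bearings_rad)
)

# export for ArcGIS (add the real origin coordinates before or after this step)
write.csv(pts, "xep02_point_cloud.csv", row.names = FALSE)
Note that drawing bearings uniformly on 0 to 360 assumes the locations are isotropic around the origin; if direction matters in your data, that assumption needs revisiting.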
Related
Fitting experimental data points to different cumulative distributions using R
I am new to programming and using R software, so I would really appreciate your feedback to the current problem that I am trying to solve. So, I have to fit a cumulative distribution with some function (two/three parameter function). This seems to be pretty straight-forward task, but I've been buzzing around this now for some time. Let me show you what are my variables: x=c(0.01,0.011482,0.013183,0.015136,0.017378,0.019953,0.022909,0.026303,0.0302,0.034674,0.039811,0.045709,0.052481,0.060256,0.069183,0.079433,0.091201,0.104713,0.120226,0.138038,0.158489,0.18197,0.20893,0.239883,0.275423,0.316228,0.363078,0.416869,0.47863,0.549541,0.630957,0.724436,0.831764,0.954993,1.096478,1.258925,1.44544,1.659587,1.905461,2.187762,2.511886,2.884031,3.311311,3.801894,4.365158,5.011872,5.754399,6.606934,7.585776,8.709636,10,11.481536,13.182567,15.135612,17.378008,19.952623,22.908677,26.30268,30.199517,34.673685,39.810717,45.708819,52.480746,60.255959,69.183097,79.432823,91.201084,104.712855,120.226443,138.038426,158.489319,181.970086,208.929613,239.883292,275.42287,316.227766,363.078055,416.869383,478.630092,549.540874,630.957344,724.43596,831.763771,954.992586,1096.478196) y=c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.00044816,0.00127554,0.00221488,0.00324858,0.00438312,0.00559138,0.00686054,0.00817179,0.00950625,0.01085188,0.0122145,0.01362578,0.01514366,0.01684314,0.01880564,0.02109756,0.0237676,0.02683182,0.03030649,0.0342276,0.03874555,0.04418374,0.05119304,0.06076553,0.07437854,0.09380666,0.12115065,0.15836926,0.20712933,0.26822017,0.34131335,0.42465413,0.51503564,0.60810697,0.69886817,0.78237651,0.85461023,0.91287236,0.95616228,0.98569093,0.99869001,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999) This is the plot where I set up x-axis as log: After some research, I have tried with Sigmoid function, as found on one of the posts (I can't add link since my reputation is not high enough). This is the code: # sigmoid function definition sigmoid = function(params, x) { params[1] / (1 + exp(-params[2] * (x - params[3]))) } # fitting code using nonlinear least square fitmodel <- nls(y~a/(1 + exp(-b * (x-c))), start=list(a=1,b=.5,c=25)) # get the coefficients using the coef function params=coef(fitmodel) # asigning to y2 sigmoid function y2 <- sigmoid(params,x) # plotting y2 function plot(y2,type="l") # plotting data points points(y) This led me to some good fitting results (I don't know how to quantify this). But, when I look at the at the plot of Sigmuid fitting function I don't understand why is the S shape now happening in the range of x-values from 40 until 7 (looking at the S shape should be in x-values from 10 until 200). Since I couldn't explain this behavior, I thought of trying Weibull equation for fitting, but so far I can't make the code running. To sum up: Do you have any idea why is the Sigmoid giving me that weird fitting? Do you know any better two or three parameter equation for this fitting approach? How could I determine the goodness of fit? Something like r^2?
# Data df <- data.frame(x=c(0.01,0.011482,0.013183,0.015136,0.017378,0.019953,0.022909,0.026303,0.0302,0.034674,0.039811,0.045709,0.052481,0.060256,0.069183,0.079433,0.091201,0.104713,0.120226,0.138038,0.158489,0.18197,0.20893,0.239883,0.275423,0.316228,0.363078,0.416869,0.47863,0.549541,0.630957,0.724436,0.831764,0.954993,1.096478,1.258925,1.44544,1.659587,1.905461,2.187762,2.511886,2.884031,3.311311,3.801894,4.365158,5.011872,5.754399,6.606934,7.585776,8.709636,10,11.481536,13.182567,15.135612,17.378008,19.952623,22.908677,26.30268,30.199517,34.673685,39.810717,45.708819,52.480746,60.255959,69.183097,79.432823,91.201084,104.712855,120.226443,138.038426,158.489319,181.970086,208.929613,239.883292,275.42287,316.227766,363.078055,416.869383,478.630092,549.540874,630.957344,724.43596,831.763771,954.992586,1096.478196), y=c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.00044816,0.00127554,0.00221488,0.00324858,0.00438312,0.00559138,0.00686054,0.00817179,0.00950625,0.01085188,0.0122145,0.01362578,0.01514366,0.01684314,0.01880564,0.02109756,0.0237676,0.02683182,0.03030649,0.0342276,0.03874555,0.04418374,0.05119304,0.06076553,0.07437854,0.09380666,0.12115065,0.15836926,0.20712933,0.26822017,0.34131335,0.42465413,0.51503564,0.60810697,0.69886817,0.78237651,0.85461023,0.91287236,0.95616228,0.98569093,0.99869001,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999)) # sigmoid function definition sigmoid = function(x, a, b, c) { a * exp(-b * exp(-c * x)) } # fitting code using nonlinear least square fitmodel <- nls(y ~ sigmoid(x, a, b, c), start=list(a=1,b=.5,c=-2), data = df) # plotting y2 function plot(df$x, predict(fitmodel),type="l", log = "x") # plotting data points points(df) The function I used is the Gompertz function and this blog post explains why R² shouldn't be used with nonlinear fits and offers an alternative.
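Regarding the goodness-of-fit part of the question: since R² is discouraged for nonlinear fits (see the blog post above), one alternative is to compare candidate models fitted to the same data using the residual standard error and information criteria. A minimal sketch, assuming the df and fitmodel objects from the code above; the logistic alternative is only there to illustrate the comparison.
# residual standard error and information criteria of the Gompertz fit
sigma(fitmodel)   # residual standard error (smaller is better)
AIC(fitmodel)
BIC(fitmodel)

# fit a competing model to the same data, e.g. the logistic from the question
fit_logis <- nls(y ~ a / (1 + exp(-b * (x - c))),
                 start = list(a = 1, b = 0.5, c = 25), data = df)

# side-by-side comparison; the lower AIC indicates the better-supported fit
AIC(fitmodel, fit_logis)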
After going through different functions and different data-sets I have found the best solution that gives the answers to all of my questions posted. The code is as it follows for the data-set stated in question: df <- data.frame(x=c(0.01,0.011482,0.013183,0.015136,0.017378,0.019953,0.022909,0.026303,0.0302,0.034674,0.039811,0.045709,0.052481,0.060256,0.069183,0.079433,0.091201,0.104713,0.120226,0.138038,0.158489,0.18197,0.20893,0.239883,0.275423,0.316228,0.363078,0.416869,0.47863,0.549541,0.630957,0.724436,0.831764,0.954993,1.096478,1.258925,1.44544,1.659587,1.905461,2.187762,2.511886,2.884031,3.311311,3.801894,4.365158,5.011872,5.754399,6.606934,7.585776,8.709636,10,11.481536,13.182567,15.135612,17.378008,19.952623,22.908677,26.30268,30.199517,34.673685,39.810717,45.708819,52.480746,60.255959,69.183097,79.432823,91.201084,104.712855,120.226443,138.038426,158.489319,181.970086,208.929613,239.883292,275.42287,316.227766,363.078055,416.869383,478.630092,549.540874,630.957344,724.43596,831.763771,954.992586,1096.478196), y=c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.00044816,0.00127554,0.00221488,0.00324858,0.00438312,0.00559138,0.00686054,0.00817179,0.00950625,0.01085188,0.0122145,0.01362578,0.01514366,0.01684314,0.01880564,0.02109756,0.0237676,0.02683182,0.03030649,0.0342276,0.03874555,0.04418374,0.05119304,0.06076553,0.07437854,0.09380666,0.12115065,0.15836926,0.20712933,0.26822017,0.34131335,0.42465413,0.51503564,0.60810697,0.69886817,0.78237651,0.85461023,0.91287236,0.95616228,0.98569093,0.99869001,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999,0.99999999)) library(drc) fm <- drm(y ~ x, data = df, fct = G.3()) #The Gompertz model G.3() plot(fm) #Gompertz Coefficients and residual standard error summary(fm) The plot after fitting
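As a small follow-up sketch (assuming the fm object from the code above, and that the drc version at hand exposes these helpers as expected), the fitted model can be queried directly for predictions and for the x values at which the curve reaches a given fraction of its upper asymptote:
coef(fm)                                                 # fitted Gompertz parameters
predict(fm, newdata = data.frame(x = c(10, 50, 200)))    # predicted y at chosen x
# x at which the curve reaches 10%, 50% and 90% of the upper limit,
# with delta-method confidence intervals
ED(fm, c(10, 50, 90), interval = "delta")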
R: How to read Nomograms to predict the desired variable
I am using RStudio. I have created nomograms using the function nomogram from package rms with the following code (copied from the example code in the documentation):
library(rms)
n <- 1000    # define sample size
set.seed(17) # so can reproduce the results
age <- rnorm(n, 50, 10)
blood.pressure <- rnorm(n, 120, 15)
cholesterol <- rnorm(n, 200, 25)
sex <- factor(sample(c('female','male'), n,TRUE))
# Specify population model for log odds that Y=1
L <- .4*(sex=='male') + .045*(age-50) + (log(cholesterol - 10)-5.2)*(-2*(sex=='female') + 2*(sex=='male'))
# Simulate binary y to have Prob(y=1) = 1/[1+exp(-L)]
y <- ifelse(runif(n) < plogis(L), 1, 0)
ddist <- datadist(age, blood.pressure, cholesterol, sex)
options(datadist='ddist')
f <- lrm(y ~ lsp(age,50)+sex*rcs(cholesterol,4)+blood.pressure)
nom <- nomogram(f, fun=function(x)1/(1+exp(-x)),  # or fun=plogis
    fun.at=c(.001,.01,.05,seq(.1,.9,by=.1),.95,.99,.999),
    funlabel="Risk of Death")
# Instead of fun.at, could have specified fun.lp.at=logit of
# sequence above - faster and slightly more accurate
plot(nom, xfrac=.45)
Result:
This code produces a nomogram, but there is no line connecting each scale (called an isopleth) to help predict the desired variable ("Risk of Death") from the plot. Usually, nomograms have an isopleth for prediction (see the example on Wikipedia). But here, how do I predict the variable's value?
EDIT: From the documentation: The nomogram does not have lines representing sums, but it has a reference line for reading scoring points (default range 0--100). Once the reader manually totals the points, the predicted values can be read at the bottom. I don't understand this. It seems that prediction is supposed to be done without the isopleth, from the points scale, but how? Can someone please elaborate, with this example, on how to read the nomogram to predict the desired variable? Thanks a lot!
EDIT 2 (FYI): In the description of the bounty I talk about the isopleth. When starting the bounty, I did not know that the nomogram function does not provide an isopleth and has a points scale instead.
From the documentation, the nomogram is used to obtain a prediction manually: at the top of the plot (above Total Points) you draw a vertical line for each of your patient's variables (for example age=40, cholesterol=220 and sex=male, blood.pressure=172), then you sum up the three values you read on the Points scale (40+60+3=103) to obtain the Total Points. Finally, you draw a vertical line on the Total Points scale (103) to read off the Risk of Death (0.55).
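As a sanity check on the manual reading, the fitted model can be asked for the same prediction directly. A minimal sketch, assuming the lrm fit f and the datadist settings from the question's code are still in the workspace; the hypothetical patient below mirrors the example values above, and the returned probability should roughly match what is read off the nomogram by hand.
newpatient <- data.frame(age = 40,
                         cholesterol = 220,
                         sex = factor("male", levels = c("female", "male")),
                         blood.pressure = 172)
predict(f, newdata = newpatient, type = "fitted")   # predicted Prob(y = 1)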
These are regression nomograms, and they work in a different way from classic nomograms. A classic nomogram performs the full calculation; with these nomograms you drop a line from each predictor to the Points scale, add up your results, and read the prediction from the Total Points scale at the bottom. The only ways to have a classic 'isopleth' nomogram working on a regression model would be (1) to have just two predictors or (2) to build a complex multi-step nomogram.
R: Robust fitting of data points to a Gaussian function
I need to do some robust data fitting. I have a bunch of (x,y) data that I want to fit to a Gaussian (aka normal) function. The point is, I want to remove the outliers. As one can see on the sample plot below, there is another distribution of data that's polluting my data on the right, and I don't want to take it into account for the fitting (i.e. to find \sigma, \mu and the overall scale parameter). R seems to be the right tool for the job; I found some packages (robust, robustbase, MASS for example) that are related to robust fitting, but they assume the user already has a strong knowledge of R, which is not my case, and their documentation is only provided as a sort of reference manual, with no tutorial or equivalent. My statistical background is rather limited; I attempted to read reference material on fitting with R, but it didn't really help (and I'm not even sure that's the right way to go). But I have the feeling that this is actually a quite simple operation. I have checked this related question (and the linked ones), however they take as input a single vector of values, and I have a vector of pairs, so I don't see how to adapt the approach. Any help on how to do this would be appreciated.
Fitting a Gaussian curve to the data: the principle is to minimise the sum of squared differences between the fitted curve and the data, so we define f as our objective function and run optim on it:
fitG = function(x,y,mu,sig,scale){
  f = function(p){
    d = p[3]*dnorm(x,mean=p[1],sd=p[2])
    sum((d-y)^2)
  }
  optim(c(mu,sig,scale),f)
}
Now, extend this to two Gaussians:
fit2G <- function(x,y,mu1,sig1,scale1,mu2,sig2,scale2,...){
  f = function(p){
    d = p[3]*dnorm(x,mean=p[1],sd=p[2]) + p[6]*dnorm(x,mean=p[4],sd=p[5])
    sum((d-y)^2)
  }
  optim(c(mu1,sig1,scale1,mu2,sig2,scale2),f,...)
}
Fit with initial params from the first fit, and an eyeballed guess of the second peak. Need to increase the max iterations:
> fit2P = fit2G(data$V3,data$V6,6,.6,.02,8.3,0.10,.002,control=list(maxit=10000))
Warning messages:
1: In dnorm(x, mean = p[1], sd = p[2]) : NaNs produced
2: In dnorm(x, mean = p[4], sd = p[5]) : NaNs produced
3: In dnorm(x, mean = p[4], sd = p[5]) : NaNs produced
> fit2P
$par
[1] 6.035610393 0.653149616 0.023744876 8.317215066 0.107767881 0.002055287
What does this all look like?
> plot(data$V3,data$V6)
> p = fit2P$par
> lines(data$V3,p[3]*dnorm(data$V3,p[1],p[2]))
> lines(data$V3,p[6]*dnorm(data$V3,p[4],p[5]),col=2)
However, I would be wary about statistical inference on your function parameters...
The warning messages produced are probably due to the sd parameter going negative. You can fix this, and also get quicker convergence, by using L-BFGS-B and setting a lower bound:
> fit2P = fit2G(data$V3,data$V6,6,.6,.02,8.3,0.10,.002,control=list(maxit=10000),method="L-BFGS-B",lower=c(0,0,0,0,0,0))
> fit2P
$par
[1] 6.03564202 0.65302676 0.02374196 8.31424025 0.11117534 0.00208724
As pointed out, sensitivity to initial values is always a problem with curve fitting like this.
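Since data$V3 and data$V6 above are not available to readers, here is a self-contained sketch of the same two-Gaussian approach on simulated (x, y) pairs shaped roughly like the question's plot (a main peak near 6 plus a small contaminating peak near 8.3); all numbers are made up purely for illustration.
set.seed(1)
x <- seq(3, 10, by = 0.02)
y <- 0.024 * dnorm(x, 6, 0.65) + 0.002 * dnorm(x, 8.3, 0.11) + rnorm(length(x), 0, 2e-4)

fit2G <- function(x, y, mu1, sig1, scale1, mu2, sig2, scale2, ...) {
  f <- function(p) {
    d <- p[3] * dnorm(x, mean = p[1], sd = p[2]) + p[6] * dnorm(x, mean = p[4], sd = p[5])
    sum((d - y)^2)
  }
  optim(c(mu1, sig1, scale1, mu2, sig2, scale2), f, ...)
}

# lower bounds keep the sd and scale parameters positive (avoids the NaN warnings)
fit2P <- fit2G(x, y, 6, 0.6, 0.02, 8.3, 0.10, 0.002,
               method = "L-BFGS-B", lower = rep(1e-6, 6),
               control = list(maxit = 10000))
fit2P$par   # should land near (6, 0.65, 0.024, 8.3, 0.11, 0.002)

plot(x, y, pch = 16, cex = 0.4)
p <- fit2P$par
lines(x, p[3] * dnorm(x, p[1], p[2]), col = 2, lwd = 2)   # Gaussian of interest
lines(x, p[6] * dnorm(x, p[4], p[5]), col = 4, lwd = 2)   # contaminating peak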
Fitting a Gaussian:
# your data
set.seed(0)
data <- c(rnorm(100,0,1), 10, 11)
# find & remove outliers
outliers <- boxplot(data)$out
data <- setdiff(data, outliers)
# fitting a Gaussian
mu <- mean(data)
sigma <- sd(data)
# testing the fit, check the p-value
reference.data <- rnorm(length(data), mu, sigma)
ks.test(reference.data, data)
Confidence interval for Weibull distribution
I have wind data that I'm using to perform extreme value analysis (calculate return levels). I'm using R with the packages 'evd', 'extRemes' and 'ismev'. I'm fitting GEV, Gumbel and Weibull distributions in order to estimate the return levels (RL) for some period T. For the GEV and Gumbel cases, I can get RLs and confidence intervals using the extRemes::return.level() function. Some code:
require(ismev)
require(MASS)
data(wind)
x = wind[, 2]
rperiod = 10
fit <- fitdistr(x, 'weibull')
s <- fit$estimate['shape']
b <- fit$estimate['scale']
rlevel <- qweibull(1 - 1/rperiod, shape = s, scale = b)
## CI around rlevel
## ci.rlevel = ??
But for the Weibull case, I need some help to generate the CIs.
I suspect the excruciatingly correct answer will be that the joint confidence region is an ellipse or some bent-sausage shape, but you can extract variance estimates for the parameters from the fit object with the vcov function and then build standard errors, for which +/- 1.96 SEs should be informative:
> sqrt(vcov(fit)["shape", "shape"])
[1] 0.691422
> sqrt(vcov(fit)["scale", "scale"])
[1] 1.371256
> s +c(-1,1)*sqrt(vcov(fit)["shape", "shape"])
[1] 6.162104 7.544948
> b +c(-1,1)*sqrt(vcov(fit)["scale", "scale"])
[1] 54.46597 57.20848
The usual way to calculate a CI for a single parameter is to assume a Normal distribution and use theta +/- 1.96*SE(theta). In this case you have two parameters, so doing that with both of them would give you a "box", the 2D analog of an interval. The truly correct answer would be something more complex in the 'scale'-by-'shape' parameter space and might be most easily achieved with simulation methods, unless you have a better grasp of theory than I have.
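One way to follow the simulation suggestion: draw parameter vectors from the asymptotic normal approximation N(estimates, vcov) and push each draw through qweibull(), then take quantiles of the simulated return levels. A minimal sketch, assuming fit, s, b and rperiod from the question's code.
library(MASS)   # for mvrnorm; already loaded by the question's code
set.seed(42)
nsim  <- 10000
draws <- mvrnorm(nsim, mu = coef(fit), Sigma = vcov(fit))
colnames(draws) <- names(coef(fit))   # "shape", "scale"

rl <- qweibull(1 - 1/rperiod, shape = draws[, "shape"], scale = draws[, "scale"])
quantile(rl, c(0.025, 0.975))   # approximate 95% CI for the return level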
R Nonlinear Least Squares (nls) Model Fitting
I'm trying to fit the information from the G function of my data to the following mathematical model: y = A / ((1 + (B^2)*(x^2))^((C+1)/2)). The shape of this graph can be seen here: http://www.wolframalpha.com/input/?i=y+%3D+1%2F+%28%281+%2B+%282%5E2%29*%28x%5E2%29%29%5E%28%282%2B1%29%2F2%29%29
Here's a basic example of what I've been doing:
library(spatstat)
data(simdat)
simdat.Gest <- Gest(simdat) # Gest is a function within spatstat (explained below)
Gvalues <- simdat.Gest$rs
Rvalues <- simdat.Gest$r
GvsR_dataframe <- data.frame(R = Rvalues, G = rev(Gvalues))
themodel <- nls(rev(Gvalues) ~ (1 / (1 + (B^2)*(R^2))^((C+1)/2)), data = GvsR_dataframe, start = list(B=0.1, C=0.1), trace = FALSE)
Gest is a function found within the 'spatstat' library. It is the G function, or nearest-neighbour function, which displays the distance between particles on the independent axis versus the probability of finding a nearest-neighbour particle on the dependent axis. Thus, it begins at y=0 and hits a saturation point at y=1.
If you plot simdat.Gest, you'll notice that the curve is 's' shaped, meaning that it starts at y = 0 and ends up at y = 1. For this reason, I reversed the vector Gvalues, which holds the dependent variables, so the information is in the correct orientation to be fitted by the above model. You may also notice that I've set A = 1; this is because G(r) always saturates at 1, so I didn't bother keeping it in the formula.
My problem is that I keep getting errors. For the above example, I get this error:
Error in nls(rev(Gvalues) ~ (1/(1 + (B^2) * (R^2))^((C + 1)/2)), data = GvsR_dataframe, : singular gradient
I've also been getting this error:
Error in nls(Gvalues1 ~ (1/(1 + (B^2) * (x^2))^((C + 1)/2)), data = G_r_dataframe, : step factor 0.000488281 reduced below 'minFactor' of 0.000976562
I haven't a clue where the first error is coming from. The second, I believe, occurred because I did not pick suitable starting values for B and C. I was hoping that someone could help me figure out where the first error is coming from. Also, what is the most effective way to pick starting values to avoid the second error? Thanks!
As noted, your problem is most likely the starting values. There are two strategies you could use:
1. Use brute force to find starting values. See package nls2 for a function to do this (a sketch follows below, after this answer).
2. Try to get a sensible guess for starting values. Depending on your values, it could be possible to linearize the model:
G = (1 / (1 + (B^2)*(R^2))^((C+1)/2))
ln(G) = -(C+1)/2 * ln(B^2*R^2 + 1)
If B^2*R^2 is large, this becomes approximately ln(G) = -(C+1)*(ln(B)+ln(R)), which is linear. If B^2*R^2 is close to 1, it is approximately ln(G) = -(C+1)/2*ln(2), which is constant. (Please check for errors, it was late last night due to the soccer game.)
Edit after additional information has been provided: The data look like they follow a cumulative distribution function. If it quacks like a duck, it most likely is a duck. And in fact ?Gest states that a CDF is estimated.
library(spatstat)
data(simdat)
simdat.Gest <- Gest(simdat)
Gvalues <- simdat.Gest$rs
Rvalues <- simdat.Gest$r
plot(Gvalues~Rvalues)
# let's try the normal CDF
fit <- nls(Gvalues~pnorm(Rvalues,mean,sd),start=list(mean=0.4,sd=0.2))
summary(fit)
lines(Rvalues,predict(fit))
# Looks not bad. There might be a better model, but not the one provided in the question.
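Here is a hedged sketch of the brute-force route mentioned in point 1, using the nls2 package to evaluate the original model over a grid of candidate (B, C) values and then refine the best grid point with ordinary nls; given the remarks above about the model itself, the refinement step may still fail, so it is wrapped in try().
library(spatstat)
library(nls2)

data(simdat)
simdat.Gest <- Gest(simdat)
GvsR <- data.frame(R = simdat.Gest$r, G = rev(simdat.Gest$rs))

# brute force: evaluate the residual sum of squares on a grid of (B, C) candidates
grid <- expand.grid(B = seq(0.1, 20, length.out = 30),
                    C = seq(0.1, 20, length.out = 30))
coarse <- nls2(G ~ 1 / (1 + (B^2) * (R^2))^((C + 1) / 2),
               data = GvsR, start = grid, algorithm = "brute-force")
coef(coarse)   # best grid point, to be used as starting values

# refinement with nls; may still fail if the model is a poor description of the data
refined <- try(nls(G ~ 1 / (1 + (B^2) * (R^2))^((C + 1) / 2),
                   data = GvsR, start = coef(coarse)))
if (!inherits(refined, "try-error")) summary(refined)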