I'm trying to use the fitdist function in R to fit data to three different distributions by maximum likelihood so I can compare them. Lognormal and Weibull work fine, but I am struggling with the inverse Gaussian.
I need to specify starting values, but when I do I get an error message.
fw <- fitdist(claims, "weibull")   # works
fln <- fitdist(claims, "lnorm")    # works
fig <- fitdist(claims, "invgauss", start = list(mu = 0, lambda = 1))   # does not work
Error: 'The pinvgauss function should return a zero-length vector when input has length zero and not raise an error'
What is wrong with my code?
I ran into a similar issue and found the problem was how I labelled my starting values. The actuar package names the inverse Gaussian parameters mean and shape, so those are the labels fitdist expects. (Note also that the inverse Gaussian mean must be strictly positive, so mu = 0 would be an invalid starting value in any case.) The following code gave me a solution:
library(actuar)
library(fitdistrplus)
fig <- fitdist(claims, "invgauss", start = list(mean = 5, shape = 1))
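For a self-contained check, here is a minimal sketch using simulated data (the claims vector from the question isn't available, so rinvgauss stands in for it, and the starting values are arbitrary):
library(actuar)        # provides dinvgauss/pinvgauss parameterised by mean and shape
library(fitdistrplus)
set.seed(1)
claims_sim <- rinvgauss(500, mean = 5, shape = 1)  # simulated stand-in for claims
fig <- fitdist(claims_sim, "invgauss", start = list(mean = 5, shape = 1))
summary(fig)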
I have been given MATLAB code and need to figure out how to do the same in R.
These are my instructions:
then for each column (variable) all the rank values in Rtr are transformed back to 'actual' values by the built-in norminv() function which is just the inverse of the Gaussian cumulative density function
Basically, the step I cannot figure out is the equivalent of MATLAB's norminv() command; specifically, it looks like this:
output(:,i) = norminv(data(:,i)/(N+1),0,1)
I have tried this solution given in another thread:
library(actuar)
library(fitdistrplus)
fig <- fitdist(claims, "invgauss", start = list(mean = 5, shape = 1))
But as far as I can tell from the output, it doesn't actually give you transformed data, just a fit of the inverse Gaussian to the data.
Does anyone have a good solution to my problem? Or am I missing something in the output I get from that other solution?
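For what it's worth, R's qnorm() is the inverse of the Gaussian CDF, i.e. the counterpart of MATLAB's norminv(). A minimal sketch of that MATLAB line, assuming data is a numeric matrix of ranks and N is the number of observations per column:
N <- nrow(data)                       # assumes ranks run 1..N within each column
output <- matrix(NA_real_, N, ncol(data))
for (i in seq_len(ncol(data))) {
  output[, i] <- qnorm(data[, i] / (N + 1), mean = 0, sd = 1)
}
# qnorm() is vectorised, so the loop can also be a single call:
# output <- qnorm(data / (N + 1))
The fitdist call from the other thread estimates inverse Gaussian parameters; it does not transform the data, so qnorm() is the piece that corresponds to norminv().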
I am trying to use a smoothing spline on my dataset, using the smooth.spline function, and then want to plot my fit. However, for some reason it won't plot my model, and it doesn't give any error. I do get a warning after running smooth.spline that 'cross-validation with non-unique 'x' values seems doubtful', but I don't think that should make much difference to the practical result.
My code is:
library('splines')
fit_spline <- smooth.spline(data.train$age,data.train$effect,cv = TRUE)
plot(data$effect,data$age,col="grey")
lines(fit_spline,lwd=2,col="purple")
legend("topright",("Smoothing Splines with 5.048163 df selected by CV"),col="purple",lwd=2)
What I get is a plot of my data points with no spline line drawn over them.
Can someone tell me what I am doing wrong here?
Two issues:
Number 1. If you do smooth.spline(x, y), plot your data with plot(x, y) not plot(y, x).
Number 2. Don't pass in data.train for fitting and then a different dataset, data, for plotting. If you want to see how the spline looks at new data points, use predict.smooth.spline first. See ?predict.smooth.spline.
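Putting both fixes together, a minimal sketch (assuming the data.train and data objects from the question):
# smooth.spline() lives in base R's stats package; library('splines') is not required
fit_spline <- smooth.spline(data.train$age, data.train$effect, cv = TRUE)
# Fix 1: plot the fitting data as (x, y) = (age, effect), not (effect, age)
plot(data.train$age, data.train$effect, col = "grey", xlab = "age", ylab = "effect")
lines(fit_spline, lwd = 2, col = "purple")
# Fix 2: to overlay the spline at the x values of a different dataset,
# predict first and draw the predictions:
# pred <- predict(fit_spline, x = sort(data$age))
# lines(pred, lwd = 2, col = "purple")
legend("topright", paste0("Smoothing spline, df = ", round(fit_spline$df, 2), " selected by CV"),
       col = "purple", lwd = 2)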
I have some time-to-event data from which I need to generate around 200 shape/scale parameter pairs for subgroups in a simulation model. I have analysed the data, and it is best described by a Weibull distribution.
Normally I would use the fitdistrplus package and fitdist(x, "weibull") to do so. However, this data has been matched using kernel matching, and I have a variable of weighting values called km, so the fit needs to incorporate those weights, which isn't something fitdist can do as far as I can tell.
With my gamma-distributed data, instead of using fitdist I did the calculation manually using the wtd.mean and wtd.var functions from the Hmisc package, which worked well. However, a similar approach for the Weibull is eluding me.
I've been testing a few options and comparing them against the fitdist results:
test_data <- rweibull(100, 0.676, 946)
fitweibull <- fitdist(test_data, "weibull", method = "mle", lower = c(0,0))
fitweibull$estimate
shape scale
0.6981165 935.0907482
I first tested this: The Weibull distribution in R (ExtDist)
library(bbmle)
m1 <- mle2(y~dweibull(shape=exp(lshape),scale=exp(lscale)),
data=data.frame(y=test_data),
start=list(lshape=0,lscale=0))
which gave me lshape = -0.3919991 and lscale = 6.852033; since these are on the log scale, exp() gives shape ≈ 0.68 and scale ≈ 946 on the natural scale.
The other thing I've tried is eweibull from the EnvStats package.
eweibull <- eweibull(test_data)
eweibull$parameters
shape scale
0.698091 935.239277
However, while these give results, I still don't see how to incorporate the weights into any of them.
Edit: I have also tried the similarly named eWeibull from the ExtDist package (which I'm not 100% sure still works, but it does have a Weibull function that takes weights!). I get a lot of error messages about the inputs being non-computable (NA or infinite). If I call it with map, as map(test_data, test_km, eWeibull), I get NULL for all 100 values. If I try it with just test_data, I get a long string of errors associated with optimx.
I have also tried fitDistr from the propagate package, which gives errors that the weights should be a specific length. For example, if both are set to length 100, I get an error that the weights should be length 94; if I set them to 94, it tells me they have to be length 132.
I need to be able to pass either a set of pre-weighted mean/var/sd etc data into the calculation, or have a function that can take data and weights and use them both in the calculation.
After much trial and error, I edited the eweibull function from the EnvStats package to use wtd.mean(x, w) and sqrt(wtd.var(x, w)) in place of mean(x) and sd(x). This now runs and outputs weighted values.
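To make that concrete, here is a minimal sketch of the same idea: a weighted method-of-moments fit for the Weibull, mirroring the edit to EnvStats::eweibull described above. weighted_eweibull is a hypothetical helper name, and the bracketing interval for the shape is an assumption:
library(Hmisc)   # wtd.mean, wtd.var

weighted_eweibull <- function(x, w) {
  m <- wtd.mean(x, weights = w)
  s <- sqrt(wtd.var(x, weights = w))
  cv2 <- (s / m)^2
  # For Weibull(shape k, scale b): CV^2 = gamma(1 + 2/k) / gamma(1 + 1/k)^2 - 1,
  # so solve that equation for k, then recover the scale from the weighted mean.
  f <- function(k) gamma(1 + 2/k) / gamma(1 + 1/k)^2 - 1 - cv2
  shape <- uniroot(f, interval = c(0.02, 50))$root   # assumed search range
  scale <- m / gamma(1 + 1/shape)
  c(shape = shape, scale = scale)
}

# Example with the simulated data and uniform weights:
# weighted_eweibull(test_data, rep(1, length(test_data)))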
So I have this discrete set of data, my_dat, that I am trying to fit a distribution to so that I can generate random variables based on my_dat. I had great success using fitdistrplus on continuous data, but I run into many errors when attempting to use it for discrete data.
Setup:
library(fitdistrplus)
my_dat <- c(2,5,3,3,3,1,1,2,4,6,
3,2,2,8,3,4,3,3,4,4,
2,1,5,3,1,2,2,4,3,4,
2,4,1,6,2,3,2,1,2,4,
5,1,2,3,2)
I take a look at the histogram of the data first:
hist(my_dat)
Since the data is discrete, I decided to try fitting a binomial or a negative binomial distribution, and this is where I run into trouble. Here I try to fit each:
fitNB3 <- fitdist(my_dat, discrete = T, distr = "nbinom" ) #NaNs Produced
fitB3 <- fitdist(my_dat, discrete = T, distr = "binom")
I receive two errors:
fitNB3 seems to run but notes that NaNs were produced. Can anyone let me know why this is the case?
fitB3 doesn't run at all and gives me the error: "Error in start.arg.default(data10, distr = distname) : Unknown starting values for distribution binom." Can anyone point out why this won't work here? I am unclear about what starting values to provide given that the data is discrete. (I attempted start = 1 in the fitdist call, but received another error: "Error in fitdist(my_dat, discrete = T, distr = "binom", start = 1) : the function mle failed to estimate the parameters, with the error code 100".)
I've been spinning my wheels on this for a while, but I would take any feedback regarding these errors.
Don't use hist on discrete data, because it doesn't do what you think it's doing.
Compare plot(table(my_dat)) with hist(my_dat)... and then ponder how many wrong impressions you've gotten doing this before. If you must use hist, make sure you specify the breaks; don't rely on defaults designed for continuous variables.
hist(my_dat)
lines(table(my_dat),col=4,lwd=6,lend=1)
Neither of your models can be suitable, as both of these distributions start from 0, not 1, and with the size of values you have, p(0) will not be ignorably small.
I don't get any errors fitting the negative binomial when I run your code.
The issue you had with fitting the binomial is that you need to supply starting values for the parameters, which are called size (n) and prob (p), so you'd need to say something like:
fitdist(my_dat, distr = "binom", start=list(size=15, prob=0.2))
However, you will then get a new problem! The optimizer assumes that the parameters are continuous and will fail on size.
On the other hand, this is probably a good thing, because with unknown n the MLE is not well behaved, particularly when p is small.
Typically, with the binomial it would be expected that you know n. In that case, estimation of p could be done as follows:
fitdist(my_dat, distr = "binom", fix.arg=list(size=20), start=list(prob=0.15))
However, with fixed n, maximum likelihood estimation is straightforward in any case -- you don't need an optimizer for that.
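To make that concrete, a minimal sketch (assuming size n = 20 is known, as in the fix.arg example above): for iid observations from Binomial(n, p), the MLE of p is just the sample mean divided by n.
n <- 20                      # assumed known number of trials
p_hat <- mean(my_dat) / n    # closed-form MLE of prob, no optimizer needed
p_hat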
If you really don't know n, there are a number of better-behaved estimators than the MLE to be found, but that's outside the scope of this question.