How to partition variance among parameters in a Monte Carlo simulation?

I have some doubts about how best to extract information from a Monte Carlo simulation. I will simplify the problem here using some R pseudocode, but the question is more general.
Let's say I have a function of three parameters, each of them with a mean and SD. For this example I will use normal distributions, but for the general case assume one of them follows some other distribution.
f(x,y,z) = rnorm(1, mean_x, sd_x) * rnorm(1, mean_y, sd_y) * rnorm(1, mean_z, sd_z)
I am using a Monte Carlo simulation to quantify the uncertainty, which is quite straightforward. I am interested, though, in understanding what percentage of the total uncertainty corresponds to each parameter, computed numerically rather than analytically.
One way I have envisioned would be:
Define total uncertainty as the width of the 0.05-0.95 quantile interval of the full MC simulation; this would be 100% of the model uncertainty.
Do a new simulation, but now fix one parameter at its mean value, such as:
f(x,y,z) = mean_x * rnorm(1, mean_y, sd_y) * rnorm(1, mean_z, sd_z)
The difference between the "total uncertainty" and the 0.05-0.95 quantile interval of this simulation would then be the amount of uncertainty attributable to this specific parameter.
Repeat for the other parameters.
I know this is a simplification, as it ignores interactions among parameters, but would this be correct? The actual problem is somewhat more complex than this, so analytical approaches are not really feasible.
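For concreteness, here is a minimal R sketch of the idea (the means and SDs are just illustrative placeholders):

set.seed(1)
n <- 1e5
mean_x <- 1; sd_x <- 0.1
mean_y <- 2; sd_y <- 0.2
mean_z <- 3; sd_z <- 0.3

# width of the 0.05-0.95 quantile interval
qrange <- function(v) diff(quantile(v, c(0.05, 0.95)))

total   <- qrange(rnorm(n, mean_x, sd_x) * rnorm(n, mean_y, sd_y) * rnorm(n, mean_z, sd_z))
fixed_x <- qrange(mean_x * rnorm(n, mean_y, sd_y) * rnorm(n, mean_z, sd_z))

# share of the total uncertainty attributed to x (ignoring interactions)
(total - fixed_x) / total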

Related

Random deviates of the PERT distribution

I'm working with Monte Carlo simulation in R with the following code:
A)
mc_matrix = 1
for (i in 1:1000000) {
  mc_sample = rpert(n = 1, min = 629, max = 1049, mode = 739)
  mc_matrix = rbind(mc_matrix, mc_sample)
}
mean(mc_matrix)
B)
mean_of_matrix = rpert(1000000, min=629, max=1049, mode=739)
Should these two pieces of code not give the same result? How come I'm not getting the same average even with so many samples from the distribution?
First, it would be good to let everybody know which packages you need. In your case it is the package "freedom".
Also, the newest version needs the inputs x.min, x.max and x.mode.
In a Monte Carlo simulation you simulate random variables to calculate, for example, the mean, as in your example. Keep in mind that this is just an asymptotic approximation of the distribution.
If you try this with the built-in rnorm(n) function you get different results for the mean, even though the true mean of every simulated normally distributed random variable is the same.
So if you try
mean(rnorm(10000))
mean(rnorm(10000))
the results will slightly differ.
Every programming language has a pseudo-random number generator built in. If you need the same random variables again and again, you can use the function set.seed(seed) to restart the random number generator at the same point.
Try
set.seed(100)
mean(rnorm(1000))
set.seed(100)
mean(rnorm(1000))
you will get the same results.
You can try this with your example, but the results will still differ, for two reasons: each run uses different random draws unless you set the same seed, and in version A mc_matrix is initialized to 1, so that spurious value is also included in the mean. But you are right that, apart from that initial value, the results should be the same with the same random variables, because it is the same calculation.
That's the basic principle of a Monte Carlo simulation: simulate n random variables, with n large, to approximate the asymptotic distribution.
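For example, here is a minimal sketch (assuming the freedom package's rpert with the x.min/x.max/x.mode arguments mentioned above) showing that the loop and the vectorized call agree under the same seed once the spurious initial value is dropped:

library(freedom)  # assumed to provide rpert(n, x.min, x.mode, x.max)
set.seed(100)
samples <- numeric(1000)  # preallocate instead of growing with rbind
for (i in 1:1000) {
  samples[i] <- rpert(n = 1, x.min = 629, x.mode = 739, x.max = 1049)
}
set.seed(100)
vectorized <- rpert(n = 1000, x.min = 629, x.mode = 739, x.max = 1049)
# the means should match, since both versions consume the RNG stream
# one variate at a time in the same order
c(mean(samples), mean(vectorized))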

bnlearn::bn.fit difference and calculation of methods "mle" and "bayes"

I try to understand the differences between the two methods bayes and mle in the bn.fit function of the package bnlearn.
I know about the debate between the frequentist and the Bayesian approach to understanding probabilities. On a theoretical level I suppose the maximum likelihood estimate mle is a simple frequentist approach, setting the relative frequencies as the probabilities. But what calculations are done to get the bayes estimate? I already checked the bnlearn documentation, the description of the bn.fit function, and some application examples, but nowhere is there a real description of what's happening.
I also tried to understand the function in R by first checking out bnlearn::bn.fit, leading to bnlearn:::bn.fit.backend, leading to bnlearn:::smartSapply but then I got stuck.
Some help would be really appreciated as I use the package for academic work and therefore I should be able to explain what happens.
Bayesian parameter estimation in bnlearn::bn.fit applies to discrete variables. The key is the optional iss argument: "the imaginary sample size used by the bayes method to estimate the conditional probability tables (CPTs) associated with discrete nodes".
So, for a binary root node X in some network, the bayes option in bnlearn::bn.fit returns (Nx + iss / cptsize) / (N + iss) as the probability of X = x, where N is your number of samples, Nx the number of samples with X = x, and cptsize the size of the CPT of X; in this case cptsize = 2. The relevant code is in the bnlearn:::bn.fit.backend.discrete function, in particular the line: tab = tab + extra.args$iss/prod(dim(tab))
Thus, iss / cptsize is the number of imaginary observations for each entry in a CPT, as opposed to N, the number of 'real' observations. With iss = 0 you would be getting a maximum likelihood estimate, as you would have no prior imaginary observations.
The higher iss with respect to N, the stronger the effect of the prior on your posterior parameter estimates. With a fixed iss and a growing N, the Bayesian estimator and the maximum likelihood estimator converge to the same value.
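As a minimal sketch (a made-up one-node network, just to see the formula in action):

library(bnlearn)
# toy data: a single binary variable A, observed 3 times "yes", 1 time "no"
dat <- data.frame(A = factor(c("yes", "yes", "yes", "no")))
net <- empty.graph("A")
fit_mle   <- bn.fit(net, dat, method = "mle")             # P(yes) = 3/4
fit_bayes <- bn.fit(net, dat, method = "bayes", iss = 2)  # P(yes) = (3 + 2/2)/(4 + 2) = 2/3
coef(fit_mle$A)
coef(fit_bayes$A)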
A common rule of thumb is to use a small non-zero iss so that you avoid zero entries in the CPTs, corresponding to combinations that were not observed in the data. Such zero entries could otherwise result in a network that generalizes poorly, as happened with some early versions of the Pathfinder system.
For more details on Bayesian parameter estimation you can have a look at the book by Koller and Friedman. I suppose many other Bayesian network books also cover the topic.

How to perform a Multivariate Polynomial Regression when output has stochastic behavior?

I have an experiment being simulated. This experiment has 3 parameters a, b, c (variables?), but the result, r, cannot be "predicted", as it has a stochastic component. In order to average out the stochastic component I've run this experiment several times (n). So in summary I have n 4-tuples (a, b, c, r) where a, b, c are the same but r varies. And each batch of experiments is run with different values for a, b, c (k batches), so the complete data set has k times n 4-tuples.
I would like to find the best polynomial fit for this data and to compare candidate fits, for example:
fit1: one candidate polynomial function with its corresponding error
fit2: another candidate polynomial function with its corresponding error
fit3: some 3rd-degree polynomial function and corresponding error
fit4: another, simpler 3rd-degree polynomial function and corresponding error
and so on...
This could be done with R or MATLAB. I've searched and found many examples, but none handled the same input values with different outputs.
I considered doing the multivariate polynomial regression n times, adding some small delta to each parameter, but I'd rather have a cleaner solution.
Any help would be appreciated.
Thanks in advance,
Jacques
Polynomial regression should be able to handle stochastic simulations just fine. Just simulate r n times and perform a multivariate polynomial regression across all the points you've simulated (I recommend polyfitn()).
You'll have multiple r values for the same [a,b,c], but a well-fit curve should still be able to estimate the underlying mean response.
In polyfitn it will look something like this:
n = 1000;                        % replicates per design point
a = rand(500,1);
b = rand(500,1);
c = rand(500,1);
r = zeros(length(a), n);
for k = 1:n
    for i = 1:length(a)
        r(i,k) = foo(a(i), b(i), c(i));   % your stochastic simulator
    end
end
my_functions = {'a^2 b^2 c^2 a b c', ...};
p = cell(size(my_functions));
for fun_id = 1:length(my_functions)
    % r(:) stacks the replicate columns, matching the repeated design matrix
    p{fun_id} = polyfitn(repmat([a,b,c],[n,1]), r(:), my_functions{fun_id});
end
It's not hard to iteratively/recursively generate a set of polynomial equations from a basis function, but for three variables there might not be a need to. Unless you have a specific reason for fitting higher-order polynomials (planetary physics, particle physics, and the like), you shouldn't have too many functions to fit. It is generally not good practice to use higher-order polynomials to explain data unless you have a specific reason for doing so: they risk overfitting, they behave poorly on sparse data and inter-variable noise, and more accurate non-linear methods often exist.

Simulate a distribution with a given kurtosis and skewness in r? [duplicate]

Is it possible to generate distributions in R for which the Mean, SD, skew and kurtosis are known? So far it appears the best route would be to create random numbers and transform them accordingly.
If there is a package tailored to generating specific distributions which could be adapted, I have not yet found it.
Thanks
There is a Johnson distribution in the SuppDists package. Johnson will give you a distribution that matches either moments or quantiles. Other comments are correct that four moments do not a distribution make. But Johnson will certainly try.
Here's an example of fitting a Johnson to some sample data:
require(SuppDists)
## make a weird dist with kurtosis and skew
a <- rnorm(5000, 0, 2)
b <- rnorm(1000, -2, 4)
c <- rnorm(3000, 4, 4)
babyGotKurtosis <- c(a, b, c)
hist(babyGotKurtosis, freq = FALSE)
## Fit a Johnson distribution to the data
## TODO: Insert Johnson joke here
parms <- JohnsonFit(babyGotKurtosis, moment = "find")
## Print out the parameters
sJohnson(parms)
## add the Johnson density curve to the histogram
plot(function(x) dJohnson(x, parms), -20, 20, add = TRUE, col = "red")
In the final plot (not reproduced here) you can see a bit of the issue that others point out about how four moments do not fully capture a distribution.
Good luck!
EDIT
As Hadley pointed out in the comments, the Johnson fit looks off. I did a quick test and fit the Johnson distribution using moment="quant" which fits the Johnson distribution using 5 quantiles instead of the 4 moments. The results look much better:
parms <- JohnsonFit(babyGotKurtosis, moment = "quant")
plot(function(x) dJohnson(x, parms), -20, 20, add = TRUE, col = "red")
Anyone have any ideas why Johnson seems biased when fit using moments?
This is an interesting question, which doesn't really have a good solution. I presume that even though you don't know the other moments, you have an idea of what the distribution should look like. For example, it's unimodal.
There are a few different ways of tackling this problem:
Assume an underlying distribution and match moments. There are many standard R packages for doing this. One downside is that the multivariate generalisation may be unclear.
Saddlepoint approximations. In this paper:
Gillespie, C.S. and Renshaw, E. An improved saddlepoint approximation. Mathematical Biosciences, 2007.
We look at recovering a pdf/pmf when given only the first few moments. We found that this approach works when the skewness isn't too large.
Laguerre expansions:
Mustapha, H. and Dimitrakopoulos, R. Generalized Laguerre expansions of multivariate probability densities with moments. Computers & Mathematics with Applications, 2010.
The results in this paper seem more promising, but I haven't coded them up.
This question was asked more than 3 years ago, so I hope my answer doesn't come too late.
There is a way to uniquely identify a distribution when knowing some of the moments. That way is the method of maximum entropy. The distribution that results from this method is the one that maximizes your ignorance about the structure of the distribution, given what you know. Any other distribution that also has the moments you specified, but is not the MaxEnt distribution, implicitly assumes more structure than what you put in. The functional to maximize is Shannon's information entropy, $S[p(x)] = -\int p(x)\log p(x)\,dx$. Knowing the mean, SD, skewness and kurtosis translates into constraints on the first, second, third, and fourth moments of the distribution, respectively.
The problem is then to maximize $S$ subject to the constraints:
1) $\int x\,p(x)\,dx = \mu_1$ (the first moment),
2) $\int x^2\,p(x)\,dx = \mu_2$ (the second moment),
3) ... and so on.
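Carrying out this constrained maximization with Lagrange multipliers gives the standard MaxEnt result that the density has exponential-family form, $p(x) = \exp(-\lambda_0 - \lambda_1 x - \lambda_2 x^2 - \lambda_3 x^3 - \lambda_4 x^4)$, where the $\lambda_i$ are determined numerically so that the normalization and the four moment constraints hold.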
I recommend the book "Harte, J., Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics (Oxford University Press, New York, 2011)."
Here is a link that tries to implement this in R:
https://stats.stackexchange.com/questions/21173/max-entropy-solver-in-r
One solution for you might be the PearsonDS library. It allows you to use a combination of the first four moments with the restriction that kurtosis > skewness^2 + 1.
To generate 10 random values from that distribution try:
library("PearsonDS")
moments <- c(mean = 0, variance = 1, skewness = 1.5, kurtosis = 4)
rpearson(10, moments = moments)
I agree you need density estimation to replicate any distribution. However, if you have hundreds of variables, as is typical in a Monte Carlo simulation, you need a compromise.
One suggested approach is as follows:
Use the Fleishman transform to get the polynomial coefficients for the given skew and kurtosis; Fleishman takes the skew and kurtosis and gives you the coefficients
Generate N normal variables (mean = 0, std = 1)
Apply the Fleishman coefficients to the data from step (2) to impose the given skew and kurtosis on the normal data
In this step, take the data from step (3) and transform it to the desired mean and standard deviation (std) using new_data = desired_mean + (data from step 3) * desired_std
The resulting data from step 4 will have the desired mean, std, skewness and kurtosis (see the sketch after the caveats below).
Caveats:
Fleishman will not work for all combinations of skewness and kurtosis
The above steps assume uncorrelated variables. If you want to generate correlated data, you will need an extra step before the Fleishman transform
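Here is a minimal R sketch of steps 2-4, assuming the Fleishman coefficients (a, b, c, d) for the target skew and kurtosis are already known from step 1; the coefficient values below are illustrative placeholders, not a real solution of the Fleishman equations:

b <- 1.0; c <- 0.1; d <- 0.05; a <- -c      # a = -c keeps the mean at zero
z <- rnorm(10000)                           # step 2: standard normal draws
y <- a + b*z + c*z^2 + d*z^3                # step 3: impose skew and kurtosis
desired_mean <- 5; desired_std <- 2
new_data <- desired_mean + y * desired_std  # step 4: rescale to target mean/std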
Those parameters don't actually fully define a distribution. For that you need a density or equivalently a distribution function.
The entropy method is a good idea, but if you have the data samples, you use more information than with the moments alone! So a moment fit is often less stable. If you have no further information about what the distribution looks like, then entropy is a good concept, but if you have more information, e.g. about the support, then use it!
If your data is skewed and positive, then a lognormal model is a good idea. If you also know the upper tail is finite, then do not use the lognormal, but maybe the 4-parameter beta distribution. If nothing is known about the support or tail characteristics, then maybe a scaled and shifted lognormal model is fine. If you need more flexibility regarding kurtosis, then e.g. a log-t with scaling and shifting is often fine. It can also help if you know that the fit should be near-normal; if that is the case, use a model which includes the normal distribution (often the case anyway). Otherwise you may e.g. use a generalized secant-hyperbolic distribution. If you want to do all this, then at some point the model will have several different cases, and you should make sure that there are no gaps or bad transition effects.
As @David and @Carl wrote above, there are several packages dedicated to generating different distributions; see e.g. the Probability Distributions Task View on CRAN.
If you are interested in the theory (how to draw a sample of numbers fitting a specific distribution with the given parameters), then just look up the appropriate formulas, e.g. see the gamma distribution on Wikipedia, and set up a simple equation system with the provided parameters to compute scale and shape.
See a concrete example here, where I computed the alpha and beta parameters of a required beta distribution based on mean and standard deviation.
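For instance, a moment-matching sketch for a Beta(alpha, beta) distribution on [0, 1]: from the mean mu = alpha/(alpha+beta) and the variance s2 = alpha*beta/((alpha+beta)^2 (alpha+beta+1)), one can solve for the two shape parameters (this requires s2 < mu*(1-mu)):

# moment matching for a Beta distribution on [0, 1]
beta_params <- function(mu, s2) {
  stopifnot(s2 < mu * (1 - mu))
  nu <- mu * (1 - mu) / s2 - 1  # alpha + beta
  c(alpha = mu * nu, beta = (1 - mu) * nu)
}
p <- beta_params(0.3, 0.01)
x <- rbeta(10000, p["alpha"], p["beta"])
c(mean(x), sd(x))  # should be close to 0.3 and 0.1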

Quadrature to approximate a transformed beta distribution in R

I am using R to run a simulation in which I use a likelihood ratio test to compare two nested item response models. One version of the LRT uses the joint likelihood function L(θ,ρ) and the other uses the marginal likelihood function L(ρ). I want to integrate L(θ,ρ) over f(θ) to obtain the marginal likelihood L(ρ). I have two conditions: in one, f(θ) is standard normal (μ=0,σ=1), and my understanding is that I can just pick a number of abscissa points, say 20 or 30, and use Gauss-Hermite quadrature to approximate this density. But in the other condition, f(θ) is a linearly transformed beta distribution (a=1.25,b=10), where the linear transformation B'=11.14*(B-0.11) is such that B' also has (approximately) μ=0,σ=1.
I am confused enough about how to implement quadrature for a beta distribution, and the linear transformation confuses me even more. My question is threefold: (1) can I use some variation of quadrature to approximate f(θ) when θ follows this linearly transformed beta distribution, (2) how would I implement this in R, and (3) is this a ridiculous waste of time such that there is an obviously much faster and better method to accomplish this task? (I tried writing my own numerical approximation function but found that my implementation, being limited to the R language, was just too slow.)
Thanks!
First, I assume you can express your L(θ,ρ) and f(θ) in terms of actual code; otherwise you're kinda screwed. Given that assumption, you can use integrate to perform the necessary computations. Something like this should get you started; just plug in your expressions for L and f.
marglik <- function(rho) {
  integrand <- function(theta, rho) L(theta, rho) * f(theta)
  # set your lower/upper integration limits as appropriate
  integrate(integrand, lower = -5, upper = 5, rho = rho)
}
For this to work, your integrand has to be vectorized; ie, given a vector input for theta, it must return a vector of outputs. If your code doesn't fit the bill, you can use Vectorize on the integrand function before passing it to integrate:
integrand <- Vectorize(integrand, "theta")
Edit: not sure if you're also asking how to define f(θ) for the transformed beta distribution; that seems rather elementary for someone working with joint and marginal likelihoods. But if you are, then the density of B' = a*B + b, given f(B), is
f'(B') = f(B)/a = f((B' - b)/a) / a
So in your case, f(theta) is dbeta(theta/11.14 + 0.11, 1.25, 10) / 11.14
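As a quick sanity check (using the numbers from the question), this density should integrate to 1 and have mean approximately 0:

f <- function(theta) dbeta(theta / 11.14 + 0.11, 1.25, 10) / 11.14
# the support of theta is roughly [-1.23, 9.91], so these limits cover it
integrate(f, lower = -2, upper = 10)                     # total mass, ~1
integrate(function(t) t * f(t), lower = -2, upper = 10)  # mean, ~0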
