Conditioned random variable generation from a distribution function in R

My question is related to my previous one Generate random variables from a distribution function using inverse sampling
Now I want to generate random variables from a distribution function using inverse sampling but the sampling should be conditioned.
For example, if the inverse of my cdf is:
invcdf <- function(y) a2 * log(a1/y - 1) + a3
I used inverse sampling to generate 10 random variables as follows:
invcdf(runif(10))
Now the problem is that I want the generated values to be greater (or less) than some given value.
How should I introduce this condition into the random generator?
When I use this to get values greater than 500:
invcdf(runif(10,500,1e6))
I get this error message :
Warning message:
In log((a1/y) - 1) : NaNs produced
I already tried repeating the process until the values satisfy my constraint, but it is not efficient:
repeat {
  x <- invcdf(runif(1))
  if (x > 100) break
}
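For reference, a vectorized sketch of that same rejection idea (it assumes invcdf() and its parameters are defined as above, and is still inefficient when the condition is rarely met):
rcond_reject <- function(n, cmin) {
  out <- numeric(0)
  while (length(out) < n) {
    x <- invcdf(runif(n))                    # draw a whole batch at once
    out <- c(out, x[!is.nan(x) & x > cmin])  # keep only values meeting the constraint
  }
  out[1:n]
}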

As @spf614 noted, you'd better have checks in your function, like
invcdf <- function(y) {
  if (a1 > y) {
    return(a2 * log(a1/y - 1) + a3)
  }
  NaN
}
Then it works for all arguments. Sampling would be:
low <- ...
r <- invcdf(runif(low, a1, 1e6))
UPDATE
Checking for NaNs in the output:
nof_nans <- sum(is.nan(r))
if (nof_nans > 0) {
  ....
}
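For exact conditional sampling, one can instead restrict the uniform draw itself rather than checking for NaNs afterwards. A minimal sketch, assuming the CDF corresponding to invcdf() is available (here as a hypothetical helper cdf()) and that invcdf() is increasing in its argument; if it is decreasing, draw from (0, cdf(cmin)) instead:
rcond_exact <- function(n, cmin, cdf, invcdf) {
  u <- runif(n, min = cdf(cmin), max = 1)  # restrict U to the region that maps to x > cmin
  invcdf(u)                                # every draw satisfies the constraint exactly
}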

The reason that you're getting NaNs is that R is trying to take the logarithm of a negative number. Do you want the log term to be log((a1/y)-1) or log(a1/(y-1))? You currently have the function written the first way, and when you get a very high value for y, the term a1/y approaches zero (the speed with which it approaches zero depends on the value of a1). Thus, subtracting 1 gives you a negative number inside the log function. So if the term is meant to be how you have it written (log(a1/y - 1)), you simply won't be able to calculate it for y >= a1, where a1/y - 1 <= 0.
The simple fix is just
invcdf <- function(y) {
  a2 * log(a1/(y-1)) + a3
}
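With the rewritten function and made-up positive parameter values (a1, a2, a3 are placeholders here, not from the question), the failing call from the question no longer produces NaNs, since y - 1 > 0 for y in (500, 1e6):
a1 <- 2; a2 <- 1; a3 <- 0                # hypothetical parameter values
invcdf <- function(y) a2 * log(a1/(y-1)) + a3
invcdf(runif(10, 500, 1e6))              # all finite: a1/(y-1) > 0 throughout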

Related

How to find the minimum floating-point value accepted by betareg package?

I'm doing a beta regression in R, which requires values between 0 and 1, endpoints excluded, i.e. (0,1) instead of [0,1].
I have some 0 and 1 values in my dataset, so I'd like to convert them to the smallest possible neighbor, such as 0.0000...0001 and 0.9999...9999. I've used .Machine$double.xmin (which gives me 2.225074e-308), but betareg() still gives an error:
invalid dependent variable, all observations must be in (0, 1)
If I use 0.000001 and 0.999999, I get a different set of errors:
1: In betareg.fit(X, Y, Z, weights, offset, link, link.phi, type, control) :
failed to invert the information matrix: iteration stopped prematurely
2: In sqrt(wpp) :
Error in chol.default(K) :
the leading minor of order 4 is not positive definite
Only if I use 0.0001 and 0.9999 can I run without errors. Is there any way I can improve on these minimum values with betareg? Or should I just be happy with that?
Try it with eps (the displacement from 0 and 1) first equal to 1e-4 (as you have here) and then 1e-3. If the results of the two fits don't differ in any way you care about, that's great. If they do, you need to be very careful, because it suggests your answers will be very sensitive to assumptions.
In the example below the dispersion parameter phi changes a lot, but the intercept and slope parameters don't change very much.
If you do find that the parameters change by a worrying amount for your particular data, then you need to think harder about the process by which zeros and ones arise, and model that process appropriately, e.g.:
a censored-data model: zeros/ones arise through a minimum/maximum detection threshold; model the zero/one values as actually being somewhere in the tails; or
a hurdle/zero-one inflation model: zeros and ones arise through a separate process from the rest of the data; use a binomial or multinomial model to characterize zero vs. (0,1) vs. one, then use a beta regression on the (0,1) component (see the sketch below).
Questions about these steps are probably more appropriate for CrossValidated than for SO.
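As a hedged illustration of the second option, a zero-inflated beta fit via the gamlss package (an assumption here, not part of the original answer); BEINF0 inflates at zero only, which matches the simulated data below (dd), where only zeros occur:
library(gamlss)
## fit mu (the beta mean) as a function of x, with a separate point mass at zero
m0 <- gamlss(y ~ x, family = BEINF0, data = dd)
summary(m0)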
sample data
set.seed(101)
library(betareg)
dd <- data.frame(x=rnorm(500))
rbeta2 <- function(n, prob=0.5, d=1) {
  rbeta(n, shape1=prob*d, shape2=(1-prob)*d)
}
dd$y <- rbeta2(500,plogis(1+5*dd$x),d=1)
dd$y[dd$y<1e-8] <- 0
trial fitting function
ss <- function(eps) {
  dd <- transform(dd, y=pmin(1-eps, pmax(eps, y)))
  m <- try(betareg(y~x, data=dd))
  if (inherits(m, "try-error")) return(rep(NA, 3))
  return(coef(m))
}
ss(0) ## fails
ss(1e-8) ## fails
ss(1e-4)
## (Intercept) x (phi)
## 0.3140810 1.5724049 0.7604656
ss(1e-3) ## also fails
ss(1e-2)
## (Intercept) x (phi)
## 0.2847142 1.4383922 1.3970437
ss(5e-3)
## (Intercept) x (phi)
## 0.2870852 1.4546247 1.2029984
try it for a range of values
evec <- seq(-4, -1, length=51)
res <- t(sapply(evec, function(e) ss(10^e)))
library(ggplot2)
ggplot(data.frame(e=10^evec, reshape2::melt(res)),
       aes(e, value, colour=Var2)) +
  geom_line() + scale_x_log10()

sampling a multidimensional posterior distribution using MCMC Metropolis-Hastings algorithm in R

I am quite new to sampling posterior distributions (and therefore to the Bayesian approach) using an MCMC technique based on the Metropolis-Hastings algorithm.
I am using the mcmc library in R for this. My distribution is multidimensional. To check that this Metropolis algorithm works for a multivariate distribution, I first ran it successfully on a multidimensional Student-t distribution (package mvtnorm, function dmvt).
Now I want to apply the same thing to my multivariate distribution (2 variables, x and y), but it doesn't work; I get an error: Error in X[, 1] : incorrect number of dimensions
Here is my code:
library(mcmc)
library(mvtnorm)
my.seed <- 123
logprior <- function(X, ...) {
  ifelse((-50.0 <= X[,1] & X[,1] <= 50.0) & (-50.0 <= X[,2] & X[,2] <= 50.0),
         return(0), return(-Inf))
}
logpost <- function(X, ...) {
  log.like <- log(exp(-((X[,1]^2 + X[,2]^2 - 4)/10)^2) * sin(4*atan(X[,2]/X[,1])))
  log.prior <- logprior(X)
  log.post <- log.like + log.prior  # with a flat prior, the posterior is the likelihood
  return(log.post)
}
x <- seq(-5, 5, 0.15)
y <- seq(-5, 5, 0.15)
X <- cbind(x, y)
# out <- metrop(function(X) dmvt(X, df=3, log=TRUE), 0, blen=100, nbatch=100)  # this works
out <- metrop(function(X) logpost(X), c(0,0), blen=100, nbatch=100)
out <- metrop(out)
out$accept
So I tried to follow the same format as in the MWE, but it still doesn't work; I get the error mentioned above.
Another thing is that applying logpost to X directly works perfectly.
Thanks in advance for your help, best
The metrop function passes individual samples, and therefore a simple vector, to logpost, not a matrix (which is what X is). Hence the solution is to change X[,1] and X[,2] to X[1] and X[2], respectively.
I ran it like this, and it leads to other issues (X[2]/X[1] is NaN at the initialization), but those have more to do with your specific likelihood model and are out of the scope of your question.
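For reference, a hedged sketch of that fix. Two extra adjustments go beyond the original model, so treat this as illustrative only: atan2() replaces atan(X[2]/X[1]) to sidestep the 0/0 at the origin, and points where the sine is non-positive return -Inf rather than NaN, so metrop() can simply reject them:
library(mcmc)
logpost_vec <- function(X, ...) {
  if (any(X < -50 | X > 50)) return(-Inf)  # flat prior on [-50, 50]^2
  s <- sin(4 * atan2(X[2], X[1]))          # atan2 avoids NaN at the origin
  if (s <= 0) return(-Inf)                 # log() needs a positive argument
  -((X[1]^2 + X[2]^2 - 4)/10)^2 + log(s)
}
out <- metrop(logpost_vec, initial = c(1, 0.5), nbatch = 100, blen = 100)
out$accept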

Numerical precision problems in R?

I have a problem with the following function in R:
test <- function(alpha, beta, n) {
  result <- exp(lgamma(alpha) + lgamma(n + beta) - lgamma(alpha + beta + n) -
                  (lgamma(alpha) + lgamma(beta) - lgamma(alpha + beta)))
  return(result)
}
Now if you insert the following values:
test(-0.03292708, -0.3336882, 10)
It should fail and produce NaN. That is because if we implement the exact same function in Excel, we get a result that is not a number. The Excel implementation is simple: J32 is the cell for alpha, K32 for beta and L32 for n. The resulting cell is given below:
=EXP(GAMMALN(J32)+GAMMALN(L32+K32)-GAMMALN(J32+K32+L32)-(GAMMALN(J32)+GAMMALN(K32)-GAMMALN(J32+K32)))
So Excel seems to give the correct answer, because the function is only defined for alpha and beta greater than zero and n greater than or equal to zero. Therefore I am wondering what is happening here. I have also tried the Rmpfr package to increase the numerical accuracy, but that does not seem to change anything.
Thanks
tl;dr log(gamma(x)) is defined more generally than you think, or than Excel thinks. If you want your function to reject negative values of alpha and beta and return NaN, just test manually and return the appropriate value (if (alpha < 0 || beta < 0) return(NaN)).
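A minimal sketch of that manual check, applied to the function from the question (using <= 0 on the assumption that zero should also be rejected, since Gamma is undefined there):
test <- function(alpha, beta, n) {
  if (alpha <= 0 || beta <= 0 || n < 0) return(NaN)  # reject values outside the intended domain
  exp(lgamma(alpha) + lgamma(n + beta) - lgamma(alpha + beta + n) -
        (lgamma(alpha) + lgamma(beta) - lgamma(alpha + beta)))
}
test(-0.03292708, -0.3336882, 10)  # NaN, as desired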
It's not a numerical accuracy problem, it's a definition issue. The Gamma function is defined for negative real values: ?lgamma says:
The gamma function is defined by (Abramowitz and Stegun section 6.1.1, page 255)
Gamma(x) = integral_0^Inf t^(x-1) exp(-t) dt
for all real ‘x’ except zero and negative integers (when ‘NaN’ is returned).
Furthermore, referring to lgamma ...
... and the natural logarithm of the absolute value of the gamma function ...
(emphasis in original)
curve(lgamma(x), -1, 1)
gamma(-0.1)           ## -10.68629
log(gamma(-0.1)+0i)   ## 2.368961+3.141593i
log(abs(gamma(-0.1))) ## 2.368961
lgamma(-0.1)          ## 2.368961
Wolfram Alpha agrees with the second calculation.

cost function in cv.glm of boot library in R

I am trying to use the cross-validation function cv.glm from the boot library in R to determine the number of misclassifications when a glm logistic regression is applied.
The function has the following signature:
cv.glm(data, glmfit, cost, K)
where the first two arguments denote the data and the model, and K specifies the number of folds.
My problem is the cost parameter which is defined as:
cost: A function of two vector arguments specifying the cost function
for the crossvalidation. The first argument to cost should correspond
to the observed responses and the second argument should correspond to
the predicted or fitted responses from the generalized linear model.
cost must return a non-negative scalar value. The default is the
average squared error function.
I guess for classification it would make sense to have a function which returns the rate of misclassification, something like:
nrow(subset(data, (predict >= 0.5 & data$response == "no") |
(predict < 0.5 & data$response == "yes")))
which is of course not even syntactically correct.
Unfortunately, my limited R knowledge has cost me hours here, and I was wondering if someone could point me in the right direction.
It sounds like you might do well to just use the cost function (i.e. the one named cost) defined further down in the "Examples" section of ?cv.glm. Quoting from that section:
# [...] Since the response is a binary variable an
# appropriate cost function is
cost <- function(r, pi = 0) mean(abs(r-pi) > 0.5)
This does essentially what you were trying to do with your example. Replacing your "no" and "yes" with 0 and 1, let's say you have two vectors, predict and response. Then cost() is nicely designed to take them and return the mean misclassification rate:
## Simulate some reasonable data
set.seed(1)
predict <- seq(0.1, 0.9, by=0.1)
response <- rbinom(n=length(predict), prob=predict, size=1)
response
# [1] 0 0 0 1 0 0 0 1 1
## Demonstrate the function 'cost()' in action
cost(response, predict)
# [1] 0.3333333 ## Which is right, as 3/9 elements (4, 6, & 7) are misclassified
## (assuming you use 0.5 as the cutoff for your predictions).
I'm guessing the trickiest bit of this will be just getting your mind fully wrapped around the idea of passing a function in as an argument. (At least that was for me, for the longest time, the hardest part of using the boot package, which requires that move in a fair number of places.)
Added on 2016-03-22:
The function cost() given above is, in my opinion, unnecessarily obfuscated; the following alternative does exactly the same thing in a more expressive way:
cost <- function(r, pi = 0) {
  mean((pi < 0.5) & r == 1 | (pi > 0.5) & r == 0)
}
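A quick sanity check with made-up vectors confirms the two formulations agree:
r  <- c(0, 0, 1, 1)                              # observed 0/1 responses
pi <- c(0.2, 0.8, 0.3, 0.9)                      # fitted probabilities
mean(abs(r - pi) > 0.5)                          # original form: 0.5
mean((pi < 0.5) & r == 1 | (pi > 0.5) & r == 0)  # expressive form: 0.5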
I will try to explain the cost function in simple words. Let's take the
cv.glm(data, glmfit, cost, K) arguments step by step:
data
The data consists of many observations. Think of it as a series of numbers.
glmfit
It is the generalized linear model that runs on the above data. But there is a catch: the data is split into K parts, and the model is fitted on all but one part (the training set) and used to predict the held-out part (the test set), each part in turn. The output is a series of predictions with the same number of elements as the held-out input passed.
cost
The cost function. It takes two arguments: first the observed responses of the test set, and second the output of glmfit on that test set. The default is the mean squared error function: it sums the squared differences between observed and predicted data points over the test set (output and input must have the same number of elements) and divides by their number.
K
The number of parts into which the input should be split. The default gives leave-one-out cross-validation.
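For instance, a small sketch using the nodal data from the boot package (the same data used in another answer below):
library(boot)
cost <- function(r, pi = 0) mean(abs(r - pi) > 0.5)        # misclassification rate
nodal.glm <- glm(r ~ stage + xray + acid, binomial, data = nodal)
cv.glm(nodal, nodal.glm, cost, K = 10)$delta               # 10-fold CV
cv.glm(nodal, nodal.glm, cost)$delta                       # default K = n: leave-one-out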
Judging from your cost function description, your input (x) would be a set of numbers between 0 and 1 (0-0.5 = no and 0.5-1 = yes) and your output (y) is 'yes' or 'no'. So the error (e) between observation (x) and prediction (y) would be:
cost <- function(x, y) {
  e <- 0
  for (i in 1:length(x)) {
    if (x[i] > 0.5) {
      d <- if (y[i] == 'yes') 0 else x[i] - 0.5
    } else {
      d <- if (y[i] == 'no') 0 else 0.5 - x[i]
    }
    e <- e + d*d  # accumulate squared error
  }
  e <- e/i        # mean squared error
  return(e)
}
Source: http://www.cs.cmu.edu/~schneide/tut5/node42.html
The cost function can optionally be defined if there is one you prefer over the default average squared error. If you want to do so, then you would write a function that returns the cost you want to minimize, using two inputs: (1) the vector of known labels that you are predicting, and (2) the vector of predicted probabilities from your model for those corresponding labels. So for the cost function that (I think) you described in your post, you are looking for a function that will return the average rate of accurate classifications, which would look something like this:
cost <- function(labels, pred){
  mean(labels == ifelse(pred > 0.5, 1, 0))
}
With that function defined you can then pass it into your cv.glm() call, although I wouldn't recommend using your own cost function over the default one unless you have a reason to. Your example isn't reproducible, so here is another example:
> library(boot)
>
> cost <- function(labels,pred){
+ mean(labels==ifelse(pred > 0.5, 1, 0))
+ }
>
> #make model
> nodal.glm <- glm(r ~ stage+xray+acid, binomial, data = nodal)
> #run cv with your cost function
> (nodal.glm.err <- cv.glm(nodal, nodal.glm, cost, nrow(nodal)))
$call
cv.glm(data = nodal, glmfit = nodal.glm, cost = cost, K = nrow(nodal))
$K
[1] 53
$delta
[1] 0.8113208 0.8113208
$seed
[1] 403 213 -2068233650 1849869992 -1836368725 -1035813431 1075589592 -782251898
...
The cost function defined in the example for cv.glm clearly assumes that the predictions are probabilities, which would require the type="response" argument in the predict function. The documentation for library(boot) should state this explicitly; otherwise one is left to wonder whether the default type="link" is used inside cv.glm, in which case the cost function would not work as intended.
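One way to settle this is to inspect the function's source directly, which shows how the predictions handed to cost() are computed:
library(boot)
print(cv.glm)  # read the body to see which type= is used in the predict() call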

computing an integral with multiple variables in R

Hi, I have an equation like the following that I want to calculate. The equation is given by:
c(t, x) = pi^(-1/2) * exp(2x) * integral_0^t s^(-1/2) exp(-x^2/s - s) ds
In this equation, x is an array from 0 to 500, and t = 500 is the upper limit of the integration. Now I want to compute c as c(500, x).
The code that I have written so far is as follows:
x <- seq(from=0,by=0.5,length=1000)
t=500
integrand <- function(t)t^(-0.5)*exp((-x^2/t)-t)
integrated <- integrate(integrand, lower=0, upper=t)
final <- pi^(-0.5)*exp(2*x)*integrated
The error I get is as follows:
Error in integrate(integrand, lower = 0, upper = t) :
evaluation of function gave a result of wrong length
In addition: Warning messages:
1: In -x^2/t :
longer object length is not a multiple of shorter object length
2: In -x^2/t - t :
longer object length is not a multiple of shorter object length
3: In t^(-0.5) * exp(-x^2/t - t) :
longer object length is not a multiple of shorter object length
But it doesn't work, because there is a variable x inside the integrand which is an array. Can anyone suggest how I can compute the integration first and then calculate the total expression for each value of x? If I change x in the integrand to a constant I can compute the integration, but I want to compute it for all the values of x from 0 to 500.
Thank you so much.
Well, here is some code, but it blows up after XX = 353:
Cfun <- function(XX, upper) {
  integrand <- function(x) x^(-0.5)*exp((-XX^2/x)-x)
  integrated <- integrate(integrand, lower=0, upper=upper)$value
  (final <- pi^(-0.5)*exp(2*XX)*integrated)
}
sapply(1:400, Cfun, upper=500)
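A hedged variant that sidesteps the blow-up: since x^2/s + s >= 2x for all s > 0, the constant factor exp(2*XX) can be folded into the integrand, keeping it in [0, 1] so neither factor overflows on its own:
Cfun2 <- function(XX, upper) {
  integrand <- function(s) s^(-0.5) * exp(-XX^2/s - s + 2*XX)  # exponent <= 0 for all s > 0
  pi^(-0.5) * integrate(integrand, lower = 0, upper = upper)$value
}
sapply(c(1, 100, 400, 499), Cfun2, upper = 500)  # no overflow past XX = 353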
I'd put the loop over values for x outside the integration. Iterate over the x-values and perform the integration for each one inside. Then you'll have C(x) as a function of x suitable for plotting.
You realize, of course, that the indefinite integral can be evaluated:
http://www.wolframalpha.com/input/?i=integrate+exp%28-%28c%2Bt%5E2%29%2Ft%29%2Fsqrt%28t%29
Maybe that will help you see what the answer looks like before you get started.
