Fixing an error in a function that uses `replicate()` in R?

I have an R function called RR. I'm wondering how to fix the following error:
Error in rbinom(1, size = n, prob = p) :
promise already under evaluation: recursive default argument reference or
earlier problems?
RR = function(n, p, n.sim){
  fun <- function(n = n, p = p){
    x <- rbinom(1, size = n, prob = p)
    res <- binom.test(x, n, p)[[4]]
    c(Lower = res[1], Upper = res[2])
  }
  sim <- t(replicate(n.sim, fun()))
  mean(sim[,1] <= p & p <= sim[,2])
}
# Example of use:
RR(n = 15, p = .5, n.sim = 5)

R throws this error whenever a parameter's default value is the parameter itself, as in function(x = x). Default arguments are evaluated lazily inside the function's own evaluation frame, so the default n = n tells R to compute n by looking up n, which is the "recursive default argument reference" the message complains about; the n in the enclosing RR call never gets a chance to be found. So function(x = x) is generally not a good idea. If you just change fun to
fun <- function(n2 = n, p2 = p)
your code runs without issues: the defaults now look up n and p in the surrounding RR call instead of referring to themselves.
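A minimal sketch of an alternative fix (my own, not from the original answer): keep the names n and p but drop the self-referential defaults and pass the values explicitly inside replicate():
RR <- function(n, p, n.sim){
  fun <- function(n, p){                      # no defaults, so nothing self-referential
    x <- rbinom(1, size = n, prob = p)
    res <- binom.test(x, n, p)[[4]]           # the confidence interval
    c(Lower = res[1], Upper = res[2])
  }
  sim <- t(replicate(n.sim, fun(n, p)))       # pass n and p explicitly
  mean(sim[, 1] <= p & p <= sim[, 2])         # empirical coverage of the interval
}
RR(n = 15, p = .5, n.sim = 5)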

Avoiding duplication in R

I am trying to fit a variety of (truncated) probability distributions to the same very small set of quantiles. I can do it, but it seems to require a lot of duplication of the same code. Is there a neater way?
I am using this code by Nadarajah and Kotz to compute quantiles of truncated distributions:
qtrunc <- function(p, spec, a = -Inf, b = Inf, ...)
{
  tt <- p
  G <- get(paste("p", spec, sep = ""), mode = "function")
  Gin <- get(paste("q", spec, sep = ""), mode = "function")
  tt <- Gin(G(a, ...) + p * (G(b, ...) - G(a, ...)), ...)
  return(tt)
}
where spec can be the name of any untruncated distribution for which code in R exists, and the ... argument is used to provide the names of the parameters of that untruncated distribution.
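For example (an illustration of my own, using the standard pnorm/qnorm from stats), the median of a standard normal truncated to [0, 2]:
qtrunc(0.5, "norm", a = 0, b = 2, mean = 0, sd = 1)
# should be roughly 0.64, the median of N(0, 1) restricted to [0, 2]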
To achieve the best fit I need to measure the distance between the given quantiles and those calculated using arbitrary values of the parameters of the distribution. In the case of the gamma distribution, for example, the code is as follows:
spec <- "gamma"
fit_gamma <- function(x, l = 0, h = 20, t1 = 5, t2 = 13){
  ct1 <- qtrunc(p = 1/3, spec, a = l, b = h, shape = x[1], rate = x[2])
  ct2 <- qtrunc(p = 2/3, spec, a = l, b = h, shape = x[1], rate = x[2])
  dist <- vector(mode = "numeric", length = 2)
  dist[1] <- (t1 - ct1)^2
  dist[2] <- (t2 - ct2)^2
  return(sqrt(sum(dist)))
}
where l is the lower truncation, h is the higher and I am given the two tertiles t1 and t2.
Finally, I seek the best fit using optim, thus:
gamma_fit <- optim(par = c(2, 4),
                   fn = fit_gamma,
                   l = l,
                   h = h,
                   t1 = t1,
                   t2 = t2,
                   method = "L-BFGS-B",
                   lower = c(1.01, 1.4))
Now suppose I want to do the same thing but fitting a normal distribution instead. The names of the parameters of the normal distribution that I am using in R are mean and sd.
I can achieve what I want but only by writing a whole new function fit_normal that is extremely similar to my fit_gamma function but with the new parameter names used in the definition of ct1 and ct2.
The problem of duplication of code becomes very severe because I wish to try fitting a large number of different distributions to my data.
What I want to know is whether there is a way of writing a generic fit_spec as it were so that the parameter names do not have to be written out by me.
Use x as a named list to create a list of arguments to pass into qtrunc() using do.call().
fit_distro <- function(x, spec, l = 0, h = 20, t1 = 5, t2 = 13){
  args <- c(x, list(spec = spec, a = l, b = h))
  ct1 <- do.call(qtrunc, args = c(list(p = 1/3), args))
  ct2 <- do.call(qtrunc, args = c(list(p = 2/3), args))
  dist <- vector(mode = "numeric", length = 2)
  dist[1] <- (t1 - ct1)^2
  dist[2] <- (t2 - ct2)^2
  return(sqrt(sum(dist)))
}
It is called as follows and gives the same result as your original function:
fit_distro(list(shape = 2, rate = 3), "gamma")
# [1] 13.07425
fit_gamma(c(2, 3))
# [1] 13.07425
This will work with other distributions, for however many parameters they have.
fit_distro(list(mean = 10, sd = 3), "norm")
# [1] 4.08379
fit_distro(list(shape1 = 2, shape2 = 3, ncp = 10), "beta")
# [1] 12.98371
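Since the parameters now travel as a named list, the same optim() call works for any distribution. A sketch of my own (the bounds and starting values for the normal fit are illustrative guesses, not from the original answer):
# gamma, as in the question, but through the generic objective
gamma_fit <- optim(par = c(shape = 2, rate = 4),
                   fn = function(par) fit_distro(as.list(par), "gamma"),
                   method = "L-BFGS-B", lower = c(1.01, 1.4))
# the normal fit needs nothing new beyond different names and bounds
norm_fit  <- optim(par = c(mean = 10, sd = 3),
                   fn = function(par) fit_distro(as.list(par), "norm"),
                   method = "L-BFGS-B", lower = c(-Inf, 0.01))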

How can I reproduce the dist function

I simulated a data frame of points (x and y values) for various calculations. The dist function works well for calculating the distance between every possible pair, and I've been trying to reproduce a simplified version that does only that (getting a Euclidean distance matrix from a data frame), but it hasn't been working so far.
If I were entering the two columns I would do something like this, but I'm trying to use just one input, the data frame.
dist <- function(x, y) {
  distance <- sqrt(sum((x - y)^2))
  return(distance)
}
I've tried working from the source code for dist, but I can't figure out how to strip away all the stuff I don't want without breaking it.
function (x, method = "euclidean", diag = FALSE, upper = FALSE, p = 2)
{
    if (!is.na(pmatch(method, "euclidian")))
        method <- "euclidean"
    METHODS <- c("euclidean", "maximum", "manhattan", "canberra",
        "binary", "minkowski")
    method <- pmatch(method, METHODS)
    if (is.na(method))
        stop("invalid distance method")
    if (method == -1)
        stop("ambiguous distance method")
    x <- as.matrix(x)
    N <- nrow(x)
    attrs <- if (method == 6L)
        list(Size = N, Labels = dimnames(x)[[1L]], Diag = diag,
            Upper = upper, method = METHODS[method], p = p,
            call = match.call(), class = "dist")
    else list(Size = N, Labels = dimnames(x)[[1L]], Diag = diag,
        Upper = upper, method = METHODS[method], call = match.call(),
        class = "dist")
    .Call(C_Cdist, x, method, attrs, p)
}
Is anyone able to point me to a viable first step? I'm really trying to learn how to program without always relying on pre-packaged functions.
You could use outer:
df <- data.frame(x = rnorm(100), y = rnorm(100))
outer(df$x, df$y, function(x, y) sqrt((x - y)^2))
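Note that this gives the pairwise absolute differences between the values in the x and y columns. If what you want is the full Euclidean distance matrix between rows (what dist() itself returns), here is a sketch of my own (not part of the original answer) using outer() over row indices:
dist_matrix <- function(df) {
  m <- as.matrix(df)                      # rows are points, columns are coordinates
  n <- nrow(m)
  outer(seq_len(n), seq_len(n),
        Vectorize(function(i, j) sqrt(sum((m[i, ] - m[j, ])^2))))
}
round(dist_matrix(head(df)), 3)           # compare with as.matrix(dist(head(df)))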

MLE - Optimization with constraints as non-linear functions of the variables

I have a problem with the following optimization. In particular, I would like to add the constraint (x - location)/scale > 0 to the MLE problem. Without this constraint the log-likelihood can be Inf, and the L-BFGS-B optimization gives the error shown below.
library(PearsonDS)
x <- rpearsonIII(n=1000, shape = 5, location = 6, scale = 7)
dpearson3 <- function (x, shape, location, scale, log = FALSE)
{
  gscale <- abs(scale)
  ssgn <- sign(scale)
  density <- dgamma(ssgn * (x - location), shape = shape, scale = gscale, log = log)
  return(density)
}
LL3 <- function(theta, x, display)
{
  shape <- as.numeric(theta[1])
  location <- as.numeric(theta[2])
  scale <- as.numeric(theta[3])
  tmp <- -sum(log(dpearson3(x, shape, location, scale, log = FALSE)))
  if (is.na(tmp)) +Inf else tmp
  if(display == 1){print(c(tmp, theta))}
  return(sum(tmp))
}
control.list <- list(maxit = 100000, factr = 1e-12, fnscale = 1)
fit <- optim(par = param,
             fn = LL3,
             hessian = TRUE,
             method = "L-BFGS-B",
             lower = c(0, -Inf, -Inf),
             upper = c(Inf, Inf, Inf),
             control = control.list,
             x = x, display = 1)
If I start the search from
param <- c(100, 1000, 10)
I get the following error:
Error in optim(par = param, fn = LL3, hessian = TRUE, method = "L-BFGS-B", :
L-BFGS-B needs finite values of 'fn'
How to solve the issue?
Changing the MLE function to
LL3 <- function(theta, x, display){
  shape <- as.numeric(theta[1])
  location <- as.numeric(theta[2])
  scale <- as.numeric(theta[3])
  tmp <- -sum(log(dpearson3(x, shape, location, scale, log = FALSE)))
  if (min((x - location)/scale) < 0) tmp <- 100000000000 # I added this line
  if (is.na(tmp)) tmp <- +Inf                            # guard against NA
  if (display == 1) print(c(tmp, theta))
  return(tmp)
}
is the smartest thing I could find; this way I avoid the Inf problem. Any better answer?
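A possible refinement (my own suggestion, not from the original post): evaluate the log-density directly with log = TRUE and map any non-finite objective value to a large finite penalty, so L-BFGS-B always receives finite values of fn:
LL3_safe <- function(theta, x, display = 0){
  shape <- theta[1]; location <- theta[2]; scale <- theta[3]
  # penalise parameter values that put any observation outside the support
  if (min((x - location)/scale) < 0) return(1e11)
  ll <- sum(dpearson3(x, shape, location, scale, log = TRUE))
  tmp <- if (is.finite(ll)) -ll else 1e11
  if (display == 1) print(c(tmp, theta))
  tmp
}
It can be passed to optim() in place of LL3 with the same arguments.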

Passing arguments and default arguments in nested functions

I have a function, x_pdf, that is supposed to calculate x*dfun(x|params) where dfun is a probability density function and params is a list of named parameters. It is defined inside of another function, int_pdf, that is supposed to integrate x_pdf between specified bounds:
int_pdf <- function(lb = 0, ub = Inf, dfun, params){
  x_pdf <- function(X, dfun, params){X * do.call(function(X){dfun(x=X)}, params)}
  out <- integrate(f = x_pdf, lower = lb, upper = ub, subdivisions = 100L)
  out
}
Note that, given my defaults for the lower and upper bound of integration, I expect that when the function is run with only the params specified it will return the mean of x.
I have a second function, int_gb2, that is a wrapper for int_pdf intended to specialize it to the generalized beta distribution of the second kind.
library(GB2)
int_gb2 <- function(lb = 0, ub = Inf, params){
  int_pdf(lb, ub, dfun = dgb2, params = get("params"))
}
When I run the function as follows:
GB2_params <- list(shape1 = 3.652, scale = 65797, shape2 = 0.3, shape3 = 0.8356)
int_gb2(params = GB2_params)
I get:
Error in do.call(what = function(X) { :
argument "params" is missing, with no default
I have spent multiple hours tweaking this, and I have managed to generate some alternative error messages, but always with respect to a missing x, X, or params.
There seem to be two problems here, both related to passing arguments: in the first, too many arguments are being passed, and in the second, too few.
First off, in your x_pdf definition, you use an anonymous function that takes a single argument (function(X){dfun(x=X)}), but you also try to pass additional arguments (the params list) to said anonymous function with do.call, which will throw an error. That part should instead look something like this:
do.call(dfun, c(list(x = X), params))
Now, you've defined x_pdf to require 3 arguments: X, dfun, and params; but when you call x_pdf with integrate you're not passing the dfun and params arguments, which again will throw an error. You could get around that by passing dfun and params, too:
integrate(f = x_pdf, lower=lb, upper=ub, subdivisions = 100L, dfun, params)
But perhaps a neater solution would be to just remove the additional arguments from the definition of x_pdf (since dfun and params are already defined in the enclosing environment), for a more compact result:
int_pdf <- function(lb = 0, ub = Inf, dfun, params){
  x_pdf <- function(X) X * do.call(dfun, c(list(x = X), params))
  integrate(f = x_pdf, lower = lb, upper = ub, subdivisions = 100L)
}
With this definition of int_pdf, everything should work as you expect:
GB2_params <- list(shape1 = 3.652, scale = 65797, shape2 = 0.3, shape3 = 0.8356)
int_gb2(params = GB2_params)
#> Error in integrate(f = x_pdf, lower = lb, upper = ub, subdivisions = 100L):
#> the integral is probably divergent
Oh. Are the example parameters missing a decimal point from the scale argument?
GB2_params$scale <- 6.5797
int_gb2(params = GB2_params)
#> 4.800761 with absolute error < 0.00015
Extra bits
We could also use some functional programming to create a function factory to make it easy to create functions for finding moments other than the first one:
moment_finder <- function(n, c = 0) {
  function(f, lb = -Inf, ub = Inf, params = NULL, ...) {
    integrand <- function(x) {
      (x - c) ^ n * do.call(f, c(list(x = x), params))
    }
    integrate(f = integrand, lower = lb, upper = ub, ...)
  }
}
To find the mean, you would just create a function to find the first moment:
find_mean <- moment_finder(1)
find_mean(dnorm, params = list(mean = 2))
#> 2 with absolute error < 1.2e-05
find_mean(dgb2, lb = 0, params = GB2_params)
#> 4.800761 with absolute error < 0.00015
For variance, you'd have to find the second central moment:
find_variance <- function(f, ...) {
  mean <- find_mean(f, ...)$value
  moment_finder(2, c = mean)(f, ...)
}
find_variance(dnorm, params = list(mean = 2, sd = 4))
#> 16 with absolute error < 3.1e-07
find_variance(dgb2, lb = 0, params = GB2_params)
#> 21.67902 with absolute error < 9.2e-05
Alternatively, we could just generalise further and find the expected value of any transformation, rather than just moments:
ev_finder <- function(transform = identity) {
  function(f, lb = -Inf, ub = Inf, params = NULL, ...) {
    integrand <- function(x) {
      transform(x) * do.call(f, c(list(x = x), params))
    }
    integrate(f = integrand, lower = lb, upper = ub, ...)
  }
}
Now moment_finder would be a special case:
moment_finder <- function(n, c = 0) {
  ev_finder(transform = function(x) (x - c) ^ n)
}
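For instance (my own illustration, not part of the original answer), the same factory can compute the expectation of a transformation such as exp, i.e. E[exp(X)]:
find_exp_mean <- ev_finder(transform = exp)
find_exp_mean(dnorm, params = list(mean = 0, sd = 1))
# for a standard normal this should come out close to exp(0.5), about 1.65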
Created on 2018-02-17 by the reprex package (v0.2.0).
If you've read this far, you might also enjoy Advanced R by Hadley Wickham.
More extra bits
@andrewH, I understood from your comment that you might be looking to find means of truncated distributions, e.g. find the mean for the part of the distribution above the mean of the entire distribution.
To do that, it's not enough to just integrate the first moment's integrand up from the mean value: you'll also have to rescale the PDF in the integrand, to make it a proper PDF again after the truncation (make up for the lost probability mass, if you will, in a hand-wavy figure of speech). You can do that by dividing by the integral of the original PDF over the support of the truncated one.
Here's the code to better convey what I mean:
library(purrr)
library(GB2)
find_mass <- moment_finder(0)
find_mean <- moment_finder(1)
GB2_params <- list(shape1 = 3.652, scale = 6.5797, shape2 = 0.3, shape3 = 0.8356)
dgb2p <- invoke(partial, GB2_params, ...f = dgb2) # pre-apply parameters
# Mean value
(mu <- find_mean(dgb2p, lb = 0)$value)
#> [1] 4.800761
# Mean for the truncated distribution below the mean
(lower_mass <- find_mass(dgb2p, lb = 0, ub = mu)$value)
#> [1] 0.6108409
(lower_mean <- find_mean(dgb2p, lb = 0, ub = mu)$value / lower_mass)
#> [1] 2.40446
# Mean for the truncated distribution above the mean
(upper_mass <- find_mass(dgb2p, lb = mu)$value)
#> [1] 0.3891591
(upper_mean <- find_mean(dgb2p, lb = mu)$value / upper_mass)
#> [1] 8.562099
lower_mean * lower_mass + upper_mean * upper_mass
#> [1] 4.800761

multinomial MLE error in R

I am new to R and am trying to do MLE using mle2 from the bbmle package.
R Code:
rm(list = ls())
library(bbmle)
N <- 100
testmat=rmultinom(N, size=3, prob = c(0.1,0.2,0.8))
LL<- function(s, p){-sum(dmultinom(x=testmat, size = s, prob=p, log = TRUE))}
values.start <- list(3, c(0.1,0.2,0.7))
names(values.start) <- parnames(LL) <- paste0("b",0:1)
mle2(LL, start =values.start)
I keep getting this error
"Error in mle2(LL, start = values.start) :
some named arguments in 'start' are not arguments to the specified log-likelihood function"
I am using mle2, though I thought it was not needed here; at first I was using mle:
N <- 100
testmat <- t(rmultinom(3, size = 3, prob = c(0.1, 0.2, 0.8)))
LL <- function(s, p1, p2, p3){
  prob <- unlist(as.list(environment()))[2:4]
  -sum(dmultinom(x = testmat, size = s, prob = prob, log = TRUE))
}
values.start <- list(s = 3, p1 = 0.1, p2 = 0.2, p3 = 7)
mle(LL, start = values.start)
which gave this error:
"Error in dmultinom(x = testmat, size = s, prob = prob, log = TRUE) :
x[] and prob[] must be equal length vectors."
I even edited it as follows
N <- 100
testmat <- t(rmultinom(3, size = 3, prob = c(0.1, 0.2, 0.8)))
LL <- function(s = 3, p1 = 0.1, p2 = 0.2, p3 = 0.7){
  prob <- unlist(as.list(environment()))[2:4]
  s <- unlist(as.list(environment()))[1]
  -sum(dmultinom(x = testmat, size = s, prob = prob, log = TRUE))
}
mle(LL)
The error still persists. Finally, I was able to decode the errors, thanks a lot:
library(bbmle)
N <- 1000
X <- rmultinom(N, size = 3, prob = rep(1/3, 3))
LL <- function(p_1 = 0.1, p_2 = 0.1, p_3 = 0.8) {
  p <- unlist(as.list(environment()))
  -sum(apply(X, MARGIN = 2, dmultinom, size = NULL, prob = c(p_1, p_2, p_3), log = TRUE))
}
mle(LL, method = "L-BFGS-B", lower = c(-Inf, 0), upper = c(Inf, Inf))
In my current problem I have 5k features, so I would need to write something like
function(p_1 = 0.1, p_2 = 0.1, p_3 = 0.8, ...., p_5000 = ..)
which is not possible. Is there any way around this?
I was able to do it with mle2 this way:
rm(list = ls())
library(bbmle)
N <- 1000
s <- 100
X <- rmultinom(N, size = s, prob = rep(1/s, s))
LL <- function(params){
  p <- unlist(as.list(environment()))
  minusll <- -sum(apply(X, MARGIN = 2, dmultinom, size = NULL, prob = p, log = TRUE))
  return(minusll)
}
values.start <- vector(mode = "list", length = s)
values.start <- c(0.02, 0.01 * rep(98/99, 99))
names(values.start) <- parnames(LL) <- paste0("b", 1:s)
mle2(LL, start = values.start, vecpar = TRUE, method = "L-BFGS-B",
     lower = c(rep(0, s)), upper = c(rep(1, s)))
Above I was doing multinomial MLE parameter estimation for dimension 100 with 1000 samples. I was able to solve the problem of vector parameters, but now I am getting this error:
Error in optim(par = c(0.02, 0.0098989898989899, 0.0098989898989899, 0.0098989898989899, :
L-BFGS-B needs finite values of 'fn'
I found out that this error is due to fn = Inf, which might be caused by one of the probabilities becoming zero, so that fn = -log(0) = Inf. Is there any way to solve this problem?
Thanks for the help.
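One way to attack that last error (my own suggestion, not from the original post) is to bound the probabilities away from zero inside the objective and renormalise them, so log(0) can never occur and fn stays finite:
LL_safe <- function(params){
  p <- pmax(params, 1e-12)   # keep every probability strictly positive
  p <- p / sum(p)            # renormalise so they still sum to 1
  -sum(apply(X, MARGIN = 2, dmultinom, size = NULL, prob = p, log = TRUE))
}
parnames(LL_safe) <- paste0("b", 1:s)
Then call mle2() exactly as before, with LL_safe in place of LL.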
