Optimization using "nlminb" in R

I am fitting a location model using non-parametric smoothing to estimate the parameters. One of the smoothed parameters is the lambda that I have to optimize, so I decided to use the nlminb function to do it.
However, my program returns the same "$par" value even though it ran 150 iterations and made 200 evaluations (by default); it simply returns the start value as $par (that is, 0.000001). I think there must be something wrong with my program.
My program looks like this (note: w is the parameter that I want to optimize, and LOO stands for leave-one-out):
BEGIN
Myfunc <- function(w, n1, n2, v1, v2, g)
{ ## open main function
  ## DATA generation
  # generate data for groups 1 and 2
  # for each group: discretise the continuous variables to binary
  # newdata <- combine groups 1 and 2

  ## MODEL construction
  countError <- 0
  n <- nrow(newdata)
  for (k in 1:n)
  { # open loop for leave-one-out
    # construct the model on the remaining n-1 objects using the smoothing method
    # classify the omitted object (countE: 0/1 misclassification indicator)
    countError <- countError + countE
  } # close loop for LOO process
  Error <- countError / n  # error rate from the LOO procedure
  return(Error)            # the average error rate from the LOO procedure
} # close Myfunc

library(stats)
nlminb(start = 0.000001, Myfunc, lower = 0.000001, upper = 0.999999,
       control = list(eval.max = 100, iter.max = 100))
END
Could someone help me? Your guidance is highly appreciated and really needed.
Hashibah,
Statistics PhD student

In your question, you provide nlminb with a univariate starting value. If you are doing univariate optimisation, it is probably worth looking at optimize instead. If your function is multivariate, then you need to call nlminb slightly differently.
You need to define the objective function so that the parameters to optimize over are supplied as a vector in its first argument. Other inputs to the objective function should be provided as subsequent arguments.
For example (modified from the nlminb help page):
X <- rnbinom(100, mu = 10, size = 10)
hdev <- function(par, x) {
  -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
}
nlminb(start = c(9, 12), hdev, x = X)
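Since the w in the question is a single bounded smoothing parameter, the univariate route with optimize is probably the simplest. The sketch below is untested and assumes Myfunc and its data arguments (n1, n2, v1, v2, g) exist as in the question; the extra arguments are passed by name after the objective, exactly as x = X is in the nlminb example above:
# Hypothetical call: minimise the LOO error rate over w in (0, 1)
opt <- optimize(Myfunc, interval = c(0.000001, 0.999999),
                n1 = n1, n2 = n2, v1 = v1, v2 = v2, g = g)
opt$minimum    # the value of w with the lowest LOO error rate
opt$objective  # the corresponding error rate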

Related

Generate Random Survival Times From A Hazard Function Applying A Hazard Ratio with flexsurv

Please consider the following:
My aim is to draw random survival times from a flexible parametric multi-state model fitted with flexsurvreg (more specifically msfit.flexsurvreg) and applying some hazard ratio (HR, in this example set to 0.2).
I found an example to generate random survival times using any hazard function here. This is also where I apply the HR.
Problem
With the actual data, I receive an error once the HR is below the value of 0.2: Error in uniroot((function(x) { : no sign change found in 1000 iterations.
This does not happen in the reproducible example below.
Questions
Is there another, better way than the one below to draw random survival times while applying a HR?
Can someone indicate why the "no sign change" error may occur and how this can be fixed?
Any help is greatly appreciated!
Minimal reproducible example
# Load package
library(flexsurv)
#> Loading required package: survival
# Load data
data("bosms4")
# Define hazard ratio
hr <- 0.2
# Fit model (weibull)
crwei <- flexsurvreg(formula = Surv(years, status) ~ trans + shape(trans),
                     data = bosms3, dist = "weibull")
# Create transition matrix
Q <- rbind(c(NA,1,2),c(NA,NA,3), c(NA,NA,NA))
# Capture parameters
pars <- pars.fmsm(crwei, trans=Q, newdata=data.frame(trans=1:3))
# Code from https://eurekastatistics.com/generating-random-survival-times-from-any-hazard-function/ ----
inverse = function(fn, min_x, max_x){
  # Returns the inverse of a function for a given range.
  # E.g. inverse(sin, 0, pi/2)(sin(pi/4)) equals pi/4 because 0 <= pi/4 <= pi/2
  fn_inv = function(y){
    uniroot((function(x){fn(x) - y}), lower=min_x, upper=max_x)[1]$root
  }
  return(Vectorize(fn_inv))
}
integrate_from_0 = function(fn, t){
  int_fn = function(t) integrate(fn, 0, t)
  result = sapply(t, int_fn)
  value = unlist(result["value",])
  msg = unlist(result["message",])
  value[which(msg != "OK")] = NA
  return(value)
}
random_survival_times = function(hazard_fn, n, max_time=10000){
  # Given a hazard function, returns n random time-to-event observations.
  cumulative_density_fn = function(t) 1 - exp(-integrate_from_0(hazard_fn, t))
  inverse_cumulative_density_fn = inverse(cumulative_density_fn, 0, max_time)
  return(inverse_cumulative_density_fn(runif(n)))
}
# Run with data ----
random_survival_times(hazard_fn = function(t){crwei$dfns$h(t, pars[[1]][1], pars[[1]][2]) * hr}, n = 100)
#> Error in integrate(fn, 0, t): non-finite function value
# Adapt random_survival time function replacing 0 with 0.1 ----
random_survival_times <- function(hazard_fn, n, max_time=10000){
  # Given a hazard function, returns n random time-to-event observations.
  cumulative_density_fn = function(t) 1 - exp(-integrate_from_0(hazard_fn, t))
  inverse_cumulative_density_fn = inverse(cumulative_density_fn, 0.1, max_time)
  return(inverse_cumulative_density_fn(runif(n)))
}
# Run again with data ----
random_survival_times(hazard_fn = function(t){crwei$dfns$h(t, pars[[1]][1], pars[[1]][2]) * hr}, n = 100)
#> Error in uniroot((function(x) {: f() values at end points not of opposite sign
# Adapt inverse adding extendedInt = "yes" ----
inverse <- function(fn, min_x, max_x){
  # Returns the inverse of a function for a given range.
  # E.g. inverse(sin, 0, pi/2)(sin(pi/4)) equals pi/4 because 0 <= pi/4 <= pi/2
  fn_inv <- function(y){
    uniroot((function(x){fn(x) - y}), lower=min_x, upper=max_x,
            # extendInt added because of an error on some distributions:
            # "Error in uniroot((function(x) { : f() values at end points not of opposite sign"
            # Solution found here: https://stackoverflow.com/questions/38961221/uniroot-solution-in-r
            extendInt = "yes"
    )[1]$root
  }
  return(Vectorize(fn_inv))
}
# Run again with data ----
res <- random_survival_times(hazard_fn = function(t){crwei$dfns$h(t, pars[[1]][1], pars[[1]][2]) * hr}, n = 100)
res[1:5]
#> [1] 1.074281 13.688663 30.896637 159.643827 15.805103
Created on 2022-10-18 with reprex v2.0.2
This method of sampling survival times basically works by sampling a random uniform(0,1) number p and finding x for which the survival probability is p. The uniroot step is used to solve S(x) = p by a numerical search. In your case, it is having difficulty finding a solution after 1000 steps.
I've seen this happen, and fixed it by adding, e.g., uniroot(..., maxiter=10000) to tell it to try a bit harder to find the solution. That's always been enough in my tests, though those may be limited. If that doesn't work, I'd advise digging in by hand and examining the survival curve that you are trying to invert - it may be invalid due to some parameter value being extreme.
(This kind of thing is done in the function qgeneric in the flexsurv package, though it borrows a vectorised version of uniroot from the rstpm2 package which is faster.)
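Concretely, the maxiter suggestion only needs a small change inside the inverse() helper from the question. A minimal sketch (not re-run against the data above):
inverse <- function(fn, min_x, max_x){
  # Returns the inverse of a function for a given range.
  fn_inv <- function(y){
    uniroot(function(x) fn(x) - y,
            lower = min_x, upper = max_x,
            extendInt = "yes",
            maxiter = 10000  # try much harder before reporting "no sign change"
    )$root
  }
  return(Vectorize(fn_inv))
}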

Implement a Monte Carlo Simulation Method to Estimate an Integral in R

I am trying to implement a Monte Carlo simulation method to estimate an integral in R. However, I still get the wrong answer. My code is as follows:
f <- function(x){
  ((cos(x))/x)*exp(log(x)-3)^3
}
t <- integrate(f, 0, 1)
n <- 10000  # Assume we conduct 10000 simulations
int_gral <- Monte_Car(n)
int_gral
You are not doing Monte Carlo here. Monte Carlo is a simulation method that approximates integrals with sums/means based on random variables.
You should do something in this flavour (you might have to verify that it is correct to say that the mean of the f output approximates your integral):
f <- function(n){
  x <- runif(n)
  return(
    ((cos(x))/x)*exp(log(x)-3)^3
  )
}
int_gral <- mean(f(10000))
What your code does is take a number n and return ((cos(n))/n)*exp(log(n)-3)^3; there is no randomness in that.
Update
Now, to get a more precise estimate, you need to replicate this step K times. Rather than using a loop, you can use the replicate function:
K <- 100
dist <- data.frame(
  int = replicate(K, mean(f(10000)))
)
You get a distribution of estimates for your integral:
library(ggplot2)
ggplot(dist) + geom_histogram(aes(x = int, y = ..density..))
and you can use mean to have a numerical value:
mean(dist$int)
# [1] 2.95036e-05
You can evaluate the precision of your estimates with
sd(dist$int)
# [1] 2.296033e-07
Here it is small because n (10,000 draws per replicate) is already large, so each single estimate is already quite precise.
I have managed to change the code as follows. Kindly confirm whether I am doing the right thing.
Regards.
f <- function(x){
  ((cos(x))/x)*exp(log(x)-3)^3
}
set.seed(234)
n <- 10000
for (i in 1:10000) {
  x <- runif(n)
  I <- sum(f(x))/n  # note: I is overwritten each pass, so only the last replicate is kept
}
I
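One way to check whether either version is on the right track is to compare the Monte Carlo mean against integrate(), which the original post already computed: for x ~ Uniform(0, 1), E[f(x)] is exactly the integral of f over [0, 1]. A quick sketch, reusing f as defined in the snippet just above:
set.seed(234)
mc_estimate   <- mean(f(runif(1e6)))      # Monte Carlo estimate from many draws
quad_estimate <- integrate(f, 0, 1)$value # deterministic quadrature for comparison
mc_estimate
quad_estimate
# Both should be close to the 2.95e-05 reported above.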

How to get gap statistic for hierarchical average clustering

I perform a hierarchical cluster analysis based on 'average linkage'. In base R, I use
dist_mat <- dist(cdata, method = "euclidean")
hclust_avg <- hclust(dist_mat, method = "average")
I want to calculate the gap statistic to decide the optimal number of clusters. I use the 'cluster' library and the clusGap function. Since I can't pass the hclust solution nor specify average hierarchical clustering in the clusGap function, I use these lines:
cluster_fun <- function(x, k) {
  list(cluster = cutree(hclust(dist(x, method = "euclidean"), method = "average"), k = k))
}
gap_stat <- clusGap(cdata, FUN = cluster_fun, K.max = 10, B = 50)
print(gap_stat)
However, here I can't check the cluster solution. So, my question is - can I be sure that the gap statistic is calculated on the same solution as hclust_avg?
Is there a better way of doing this?
Yes, it should be the same. The clusGap function calls your cluster_fun for each k you provide, then calculates the pooled within-cluster sum of squares around the cluster means, as described in the paper.
This is the bit of code called inside clusGap that calls your custom function:
W.k <- function(X, kk) {
  clus <- if (kk > 1)
    FUNcluster(X, kk, ...)$cluster
  else rep.int(1L, nrow(X))
  0.5 * sum(vapply(split(ii, clus), function(I) {
    xs <- X[I, , drop = FALSE]
    sum(dist(xs)^d.power/nrow(xs))
  }, 0))
}
And from there, the gap statistic is calculated.
You could calculate the gap statistic with your own custom code, but for the sake of reproducibility, etc., it is probably easier to use this.
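If you want extra reassurance that clusGap is scoring the same partitions as hclust_avg, a quick check (a sketch assuming cdata and cluster_fun are defined as above) is to compare the cluster assignments for each k directly:
# For each candidate k, verify that the FUN passed to clusGap reproduces
# exactly the cutree() assignments from the hclust_avg tree.
for (k in 2:10) {
  same <- identical(unname(cutree(hclust_avg, k = k)),
                    unname(cluster_fun(cdata, k)$cluster))
  cat("k =", k, "identical:", same, "\n")
}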
Thanks for solving it. I must say this is a good enough solution, but you can also try the Python code below (using scikit-learn's KMeans).
# Gap Statistic for K means
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

def optimalK(data, nrefs=3, maxClusters=15):
    """
    Calculates KMeans optimal K using Gap Statistic
    Params:
        data: ndarray of shape (n_samples, n_features)
        nrefs: number of sample reference datasets to create
        maxClusters: Maximum number of clusters to test for
    Returns: (optimal_k, resultsdf)
    """
    gaps = np.zeros((len(range(1, maxClusters)),))
    resultsdf = pd.DataFrame({'clusterCount': [], 'gap': []})
    for gap_index, k in enumerate(range(1, maxClusters)):
        # Holder for reference dispersion results
        refDisps = np.zeros(nrefs)
        # For n references, generate a random sample and perform kmeans,
        # recording the resulting dispersion of each loop
        for i in range(nrefs):
            # Create new random reference set
            randomReference = np.random.random_sample(size=data.shape)
            # Fit to it
            km = KMeans(k)
            km.fit(randomReference)
            refDisp = km.inertia_
            refDisps[i] = refDisp
        # Fit cluster to original data and create dispersion
        km = KMeans(k)
        km.fit(data)
        origDisp = km.inertia_
        # Calculate gap statistic
        gap = np.log(np.mean(refDisps)) - np.log(origDisp)
        # Assign this loop's gap statistic to gaps
        gaps[gap_index] = gap
        # DataFrame.append was removed in pandas 2.0, so build the row with concat
        resultsdf = pd.concat(
            [resultsdf, pd.DataFrame({'clusterCount': [k], 'gap': [gap]})],
            ignore_index=True)
    return (gaps.argmax() + 1, resultsdf)

score_g, df = optimalK(cluster_df, nrefs=5, maxClusters=30)
plt.plot(df['clusterCount'], df['gap'], linestyle='--', marker='o', color='b')
plt.xlabel('K')
plt.ylabel('Gap Statistic')
plt.title('Gap Statistic vs. K')
plt.show()

How to update code to create a function for calculating Welch's t for polynomial trends?

I am trying to reproduce the SPSS output for the significance of a linear trend among means when equal variances are not assumed.
I have gratefully used code from http://www-personal.umich.edu/~gonzo/coursenotes/file3.pdf to create a function for calculating separate-variance contrasts, which, based on my searching, I understand to be the "equal variances not assumed" output in SPSS.
My problem/goal:
I am only assessing polynomial orthogonal trends (mostly linear). I want to adapt the code creating the function so that the contrast argument can take pre-made contrast matrices rather than manually specifying the coefficients each time (room for typos!).
Code for creating the function from the notes:
sepvarcontrast <- function(dv, group, contrast) {
  means <- c(by(dv, group, mean))
  vars <- c(by(dv, group, var))
  ns <- c(by(dv, group, length))
  ihat <- contrast %*% means
  t.denominator <- sqrt(contrast^2 %*% (vars/ns))
  t.welch <- ihat / t.denominator
  num.contrast <- ifelse(is.null(dim(contrast)), 1, dim(contrast)[1])
  df.welch <- rep(0, num.contrast)
  if (is.null(dim(contrast))) contrast <- t(as.matrix(contrast))
  for (i in 1:num.contrast) {
    num <- (contrast[i,]^2 %*% (vars))^2
    den <- sum((contrast[i,]^2 * vars)^2 / (ns-1))
    df.welch[i] <- num/den
  }
  p.welch <- 2*(1 - pt(abs(t.welch), df.welch))
  result <- list(ihat = ihat, se.ihat = t.denominator, t.welch = t.welch,
                 df.welch = df.welch, p.welch = p.welch)
  return(result)
}
I would like to be able to use the function like this:
# Create a polynomial contrast matrix for 5 groups, then save
contr.mat5 <- contr.poly(5)
# Calculate separate variance
sepvarcontrast(dv, group, contrast = contr.mat5)
I have tried those exact commands to see if they would work but receive Error in contrast %*% means : non-conformable arguments.
All suggestions are appreciated! I am still learning how to create a reprex...
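No answer is included here, but one hedged guess at the non-conformable error: contr.poly(5) returns a 5 x 4 matrix with the orthogonal polynomial contrasts in its columns, whereas sepvarcontrast() computes contrast %*% means and indexes contrasts by row, so it expects one contrast per row. Transposing the matrix before passing it in makes the dimensions conform; a sketch with made-up data (dv and group below are placeholders, not the asker's data):
set.seed(1)
# Hypothetical data: 5 groups with unequal sizes and variances
group <- rep(1:5, times = c(8, 10, 12, 9, 11))
dv    <- rnorm(length(group), mean = group, sd = group)

contr.mat5 <- contr.poly(5)  # 5 x 4: contrasts are stored in the columns
dim(contr.mat5)
#> [1] 5 4

# Pass the transpose so that each row of `contrast` is one polynomial contrast
res <- sepvarcontrast(dv, group, contrast = t(contr.mat5))
res$t.welch
res$p.welch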

Maximize simulated likelihood in R -- is there something obvious I am not getting?

I am trying to maximize a simulated likelihood in discrete choice (Lerman and Manski (1981)) by simulating frequencies and using them as probabilities (which I cannot compute directly). However, R never manages to find any optimum (maximization always yields the starting values). As a minimal example, here is my code for a very simple probit estimation:
### simulate data
set.seed(5849)
N <- 2000
b.cons <- 8
b.x <- 10
x <- cbind(rep(1, N), runif(N)) #"observed variables"
e <- rnorm(N) # "unobserved error"
k <- runif(N)*10+7 # threshold: something random, but high enough to guarantee some variation in i
t <- x%*%c(b.cons, b.x)+e
i <- 1*(k>t) #participation dummy
### likelihood function
R <- 1000 # number of draws
err <- matrix(rnorm(R*N), N, R) # draw error terms (outside of likelihood function to speed up estimation)
# estimate b.i, sig.i
probit.sim <- function(params, I, K, X) {
  part = matrix(NA, N, R)
  T = X %*% params %*% rep(1, R) + err
  for (i in 1:R) part[,i] = K > T[,i]
  pr.i1 = rowSums(part)/R
  pr.i1[pr.i1 == 0] <- 0.001
  pr.i1[pr.i1 == 1] <- 0.999
  pr.i0 = 1 - pr.i1
  llik = t(I) %*% log(pr.i1) + t(1-I) %*% log(pr.i0)
  -llik
}
### maximize likelihood
optim(c(1,1), probit.sim, I = i, K = k, X = x)
Is it because the probabilities are not smooth enough? Is there a way to maximize things that are not super smooth? On a graph, the maximum still seems pretty clear to the eye... Or am I missing something completely else?
I am really very much of a beginner, so I thank you in advance for any helpful advice!
(Also, I would appreciate any reference that actually goes into the details of how to program such a simulated maximum likelihood function -- most references I have seen remain very theoretical about it.)
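No answer is reproduced for this question, but since the post says the maximum "seems pretty clear to the eye" on a graph, a hedged diagnostic sketch is to profile probit.sim over a grid of coefficient values (holding the constant at its true value of 8). Because the simulated frequencies only change when a draw crosses the threshold, the profile is a step function with locally flat stretches, which is consistent with the asker's suspicion about smoothness and can leave optim stuck at its starting values:
# Profile the simulated negative log-likelihood in b.x, reusing probit.sim,
# i, k, x and err from the example above.
b.grid <- seq(5, 15, by = 0.25)
nll <- sapply(b.grid, function(b) probit.sim(c(8, b), I = i, K = k, X = x))
plot(b.grid, nll, type = "s",
     xlab = "b.x (constant fixed at 8)",
     ylab = "simulated negative log-likelihood")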
