How to estimate MLE in a functional response in R

I have been trying to understand how maximum likelihood estimates are obtained in R. In the gammarus dataset example from the frair package, what are by = 0.1, a = 1.2, h = 0.08 and T = 40/24, and how do you get these values? Can someone please explain this to me?
library(frair)
data(gammarus)
with(gammarus, plot(density, eaten,
     xlab = "Prey Density", ylab = "No. Prey Eaten"))
x <- with(gammarus, seq(from = min(density), to = max(density),
     by = 0.1))
lines(x, rogersII(X = x, a = 1.2, h = 0.08, T = 40/24), col = 'grey50', lty = 2)
lines(x, rogersII(X = x, a = 0.6, h = 0.16, T = 40/24), col = 'grey50', lty = 2)
I would like to understand functional response analysis and maximum likelihood estimation in detail.

I don't think you can expect Stack Overflow to teach you "about functional response analysis and maximum likelihood estimation in detail" - that's too broad a topic for a SO question, which is intended to solve a particular programming problem.
library(frair)
data(gammarus)
## adding a bit of noise makes it easier to identify overlapping/repeated points
plot(jitter(eaten) ~ jitter(density), gammarus)
## x is the grid of prey densities from the question's code
x <- with(gammarus, seq(from = min(density), to = max(density), by = 0.1))
lines(x, rogersII(X = x, a = 1.2, h = 0.08, T = 40/24), col = 'grey50', lty = 2)
lines(x, rogersII(X = x, a = 0.6, h = 0.16, T = 40/24), col = 'grey50', lty = 2)
The parameters used here are examples only, probably chosen by visually examining the data (knowing that the attack rate a corresponds to the initial slope of the functional response curve, that 1/h corresponds to the asymptote at high densities, and that T is the exposure time of the experiment). The by = 0.1 in your code is not a model parameter at all; it is just the step size of the sequence of prey densities used to draw smooth curves.
From ?frair::gammarus:
Total experimental time was 40 hours.
Whoever wrote the example wanted the time units to be days rather than hours (a has units of 1/time and h has units of time), so they used T = 40/24 as the duration of the experiment. (The experimental duration T must be specified for Rogers-type functional responses, which allow for prey depletion, but not for simple Holling-type responses.)
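For reference (a sketch, not part of the original example): rogersII is the Rogers "random predator" equation, in which the number eaten Ne solves Ne = N0*(1 - exp(a*(h*Ne - T))); the closed-form solution uses the Lambert W function. A minimal sketch of that calculation, assuming the lamW package for Lambert W (frair's internal implementation may differ in detail):
library(lamW)   # lambertW0(); an assumption here, frair may use its own Lambert W code
rogersII_sketch <- function(X, a, h, T) {
  ## closed-form solution of Ne = X * (1 - exp(a * (h * Ne - T)))
  X - lambertW0(a * h * X * exp(-a * (T - h * X))) / (a * h)
}
## should track rogersII(X = x, a = 1.2, h = 0.08, T = 40/24) closely
curve(rogersII_sketch(x, a = 1.2, h = 0.08, T = 40/24), from = 0, to = 100)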
To estimate the parameters, you need to use frair_fit; you must provide reasonable starting values, which is one reason for doing the preliminary graph.
ff <- frair_fit(eaten ~ density, gammarus, response = 'rogersII',
start=list(a = 1.2, h = 0.08), fixed=list(T=40/24))
## add line to existing plot
lines(ff, col = 2)
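Once the model is fitted you can inspect the estimates and their uncertainty. A sketch, assuming (as in current frair versions) that the returned object stores the coefficients and the underlying bbmle::mle2 fit, and that the frair_boot()/confint() bootstrap interface is available:
## maximum likelihood estimates of a and h
ff$coefficients
## standard errors, log-likelihood etc. from the underlying mle2 fit
summary(ff$fit)
## bootstrapped confidence intervals (slow; see ?frair_boot)
ffb <- frair_boot(ff)
confint(ffb)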
One place you could look for more information on functional responses and MLEs (besides in the published literature) would be here, e.g. chap 3/p. 12, all of chaps 6/7 ...

Related

Geometric distribution with general random variable

I got this exercise as homework in the "Statistical Theory" course.
We were asked to simulate a geometric random variable; so far this is my code, with the graph immediately after it.
Define a discrete random variable starting from the Uniform(0,1)
distribution. Simulate for n = 1000 and plot the distribution of its
mean as a function of n, and the PMF. Add a horizontal line for the
theoretical mean (find it analytically, write your solution in TeX;
you may use a known formula for this distribution).
Geometric(p): choose the p parameter randomly from U(0,1) while writing
your code for a general p. Please avoid "magic numbers" inside the
code. The writing should be strictly parametric.
My question is: how can I get a better, more accurate result? My goal is for the blue line to converge as closely as possible to the theoretical expected value (mean).
library(glue)
p = runif(1)   # choosing a random p
n = 1000
U = runif(n)   # uniform draws used to generate the geometric samples
real_avg = 1/p
cum_sum = 0
avg = numeric(n)
for (i in 1:n) {
  cum_sum = cum_sum + ceiling(log(U[i]) / log(1 - p))  # inverse-CDF transform (any log base works)
  avg[i] = cum_sum / i
}
plot(1:n, avg, type = "l", lwd = 2, col = "blue",
     ylab = glue("Observed Mean for p={round(p, digits = 4)}"),
     xlab = "Number of Experiments")
abline(h = real_avg, col = "red")
print(glue("p={round(p, 4)}"))
print(glue("E[X]={1/p}"))

Fitting a physical model to a specific data using nls: over-parameterization or unidentifiable parameters?

I have a somewhat complex physical model with five unknown parameters to fit, but no success so far.
I used nls2 first to get estimates for the start values, but then nls, nlxb, and nlsLM all threw the famous "singular gradient matrix at initial parameter estimates" error.
I extracted the start values for nls2 from the literature, so I think I have good starting values, at least for nls2. The parameter estimates from nls2 also make good physical sense; however, they don't resolve the singular gradient matrix error.
Since it's a physical model, every coefficient has a physical meaning, and I prefer not to fix any of them.
I should also mention that all five unknown parameters in the model equation are positive and the shape parameter m can go up to 2.
Reading through many posts and trying different suggested solutions, I have come to the conclusion that I have either an over-parameterization or an unidentifiable-parameters problem.
My question is: should I stop trying to use nls with this specific model (with this many unknown parameters), or is there a way out?
I am quite new to this topic, so any help, mathematical or code-wise, is greatly appreciated.
Here is my MWE:
library(nls2)
# Data
x <- c(0, 1000, 2000, 2500, 2750, 3000, 3250, 3500, 3750, 4000, 5000)
y <- c(1.0, 0.99, 0.98, 0.95, 0.795, 0.59, 0.35, 0.295, 0.175, 0.14, 0.095)
# Start-value bounds for nls2
bounds <- data.frame(a = c(0.8, 1.5), b = c(1e+5, 1e+7), c = c(0.4, 1.4), n = c(0.1, 2), m = c(0.1, 2))
# Model equation function
mod <- function(x, a, b, c, n, m){
  t <- b * 85^n * exp(-c / 0.0309)
  1 - exp(-(a / (t * x))^m)
}
# # Model equation
# mod <- y ~ (1 - exp(-(a/(b*85^n*exp(-c/0.0309)*x))^m))
# Model fit with nls2
fit2 <- nls2(y ~ mod(x, a, b, c, n, m), data = data.frame(x, y), start = bounds, algorithm = "brute-force")
# Model fit with nls (this throws the singular gradient error)
fit <- nls(y ~ mod(x, a, b, c, n, m), data = data.frame(x, y), start = coef(fit2))
The more I look at this the more confused I get, but I'm going to try again.
Looking again at your model, the expression inside the exponential is
-(a/(b*85^n*exp(-c/0.0309)*x))^m
We can rewrite this as
-( [a/(b*85^n*exp(-c/0.0309))] * 1/x )^m
(please check my algebra!)
If this is correct, then that whole bracketed term does not depend on x at all; it collapses to a single constant in the equation. (In other words, {a, b, c, n} are all jointly unidentifiable.) Lumping that term into a single parameter phi:
1 - exp(-(phi/x)^m)
phi is a scale parameter (it has the same units as x and should be roughly the same magnitude as a typical value of x): let's try a starting value of 2500 (the mean value of x)
m is a shape parameter; we can't go too badly wrong starting from m==1
Now nls works fine without any extra help:
n1 <- nls(y~1 - exp(-(phi/x)^m), start=list(phi=2500,m=1), data=data.frame(x,y))
and gets phi=2935, m=6.49.
Plot predictions:
plot(x,y, ylim=c(0,1))
xvec <- seq(0, 5000, length=101)
lines(xvec, predict(n1, newdata=data.frame(x=xvec)))
Another way to think about what this curve is doing: we can transform the equation to -log(1-y) = phi^m*(1/x)^m: that is, -log(1-y) should follow a power-law curve with respect to 1/x.
Here's what that looks like:
plot(1/x, -log(1-y))
## curve() uses "x" as the current x-axis variable, i.e.
## read "x" as "1/x" below.
with(as.list(coef(n1)), curve(phi^m*x^m, add=TRUE))
In this format, it appears to fit the central data well but fails for large values of 1/x (the x = 0 point is missing here because 1/x goes to infinity).
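To see the joint unidentifiability concretely, here is a small check (not from the original answer): two parameter sets that give the same lumped constant phi = a/(b*85^n*exp(-c/0.0309)) produce identical predicted curves, so the data cannot tell them apart.
phi_of <- function(a, b, c, n) a / (b * 85^n * exp(-c / 0.0309))
## two arbitrary, hypothetical parameter sets with the same phi
p1 <- list(a = 1.0, b = 1e6, c = 0.9, n = 0.5)
p2 <- list(a = 2.0, b = 2e6, c = 0.9, n = 0.5)   # doubling a and b leaves phi unchanged
xvec <- seq(100, 5000, length = 50)
pred <- function(p, m = 1) 1 - exp(-(phi_of(p$a, p$b, p$c, p$n) / xvec)^m)
all.equal(pred(p1), pred(p2))   # TRUE: the two predicted curves are indistinguishable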

Difference between prop.table() & dnorm()

Could someone explain why the following two plots yield different results:
prop.table(table(S)) [where S is the random variable, representing roulette wheel outcomes in this case]
dnorm([a list of values over the range of S], mean(S), sd(S))
Here is my code snippet:
# Frequency plot of the random variable (S)
plot(prop.table(table(S)), xlab = "Net Profit", ylab = "Probability", type = "h")
base <- seq(min(S), max(S), length = B)
pdf = data.frame(profit = base, probability = dnorm(base, avg, sd))
lines(pdf)
I can't upload pictures of my plot because of inadequate reputation.
However, the peak of the dnorm line plot is about half the height of the prop.table(table(S)) plot.
Could you clarify my understanding?
prop.table(table(S)) gives us the probability of a value occurring (as given by the value's frequency of occurrence)
dnorm(value, mean, sd) gives us the probability of a value occurring (as given by the normal distribution)
If both are the probability of the same thing, shouldn't the peaks overlap, as shown in the video?
Thanks in advance :D
Update:
Here is the exact code I'm using:
set.seed(1)
plays <- 1000
B <- 10000
# Monte Carlo simulation of the roulette wheel
S <- replicate(B, {  # S because it is the random variable
  sum(sample(c(-1, 1), plays, replace = TRUE, prob = c(18/38, 20/38)))
  # -1 -> casino loses the bet; 1 -> casino wins the bet
})
avg = mean(S); sd = sd(S)
# Frequency plot of the random variable (roulette wheel outcomes)
plot(prop.table(table(S)), xlab = "Net Profit", ylab = "Probability", type = "h")
base <- seq(min(S), max(S), length = B)
pdf = data.frame(profit = base, probability = dnorm(base, avg, sd))
lines(pdf)
A probability density is not a probability. It is a probability per unit of something.
Your sample, S, only ever takes values that are divisible by 2: each play is either -1 or 1, and with an even number of plays (1000) the sum is always even, so consecutive possible outcomes are 2 apart. When you tabulate, you'll notice this. prop.table then returns the proportions (probabilities) of those values (..., -2, 0, 2, 4, 6, ...). These are discrete values, not continuous.
dnorm returns the density for a given normal distribution. So if you want to use dnorm to emulate a probability, you need to multiply it by the width of one discrete step. In this case that is 2, the spacing between possible outcomes (and the width of the histogram bars).
pdf2 = data.frame(profit = base, probability = dnorm(base,avg,sd) * 2)
lines(pdf2, col="blue", lwd=2)
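As a quick sanity check (not in the original answer), the discrete probabilities sum to 1, and the densities only do so once multiplied by the spacing between outcomes; that is where the factor of 2 comes from:
## discrete probabilities sum to exactly 1
sum(prop.table(table(S)))
## densities times the spacing (2) give an approximate Riemann sum of ~1
sum(dnorm(seq(min(S), max(S), by = 2), avg, sd) * 2)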

Unexpected behavior in spatstat inhomogeneous K-, F- and G-functions

I have a point pattern with about 84,000 points. Quadrat tests suggested inhomogeneous intensity, so I tried different kernel bandwidths and got very odd behavior in the inhomogeneous implementations of the K-, F- and G-functions. Here is an example of the inhomogeneous F-function plot. Clearly, the estimated F-function does not reach 1 within the distance range, while the Poisson process just flatlines. The F-function should also be increasing, so the dips are odd. When manually specifying a longer range of r in the Finhom() function, the function still does not evaluate beyond the suggested range of 2000.
Unfortunately, I cannot share my data. However, I managed to reproduce some of the errors with an admittedly very simple example of a point pattern on the unit square:
library(spatstat) # version 1.57-1
# define point pattern
ex <- as.ppp(data.frame(x = c(.9, .25, .29, .7, .72, .8, .72, .85),
                        y = c(.1, .25, .29, .5, .5, .1, .45, .08)),
             W = owin(c(0, 1), c(0, 1)))
plot(ex)
# testing inhomogeneity
quadrat.test(ex, 3, 3, method = "M", nsim = 500) # p around 0.05
# set bandwidth
diggle <- bw.diggle(ex)
# suggested bandwidth of 0.028
# estimate inhomogeneous F-function
Fi <- Finhom(ex, sigma = diggle)
plot(Fi, main ="Finhom for ex pattern")
The plot is attached here. Similar to my real data, the plot stops evaluating at r = 0.5, flatlines and does not go up all the way to 1.
Interestingly, when supplying the intensity directly via the lambda argument in the Finhom() function, the behavior changes:
lambda_ex <- density(ex, sigma = diggle, at = "points")
Fi_lambda <- Finhom(ex, lambda = lambda_ex)
plot(Fi_lambda, main ="Finhom w/ lambda directly")
Here, the functions behave as expected.
My questions are:
Why is there a difference between the directly supplied intensity and the intensity estimated internally by the Finhom() function?
What could be the reason for the odd behavior of the F-function here: a code issue or user error? (Side note: the G- and K-functions also show odd behavior; to keep this question short-ish, I've focused on the F-function.)
Thank you!
As pointed out by Adrian Baddeley in the other answer, this is not a bug in Finhom as such. You would expect that
Fi <- Finhom(ex, sigma = diggle)
should be equivalent to
lambda_ex <- density(ex, sigma = diggle, at = "points")
Fi_lambda <- Finhom(ex, lambda = lambda_ex)
However, different values of the argument lmin are implied by these commands. In the first case lambda is estimated everywhere in the window and the minimum value is used. In the second case only the given values of lambda are used to find the minimum. Those minima can of course be quite different. The importance of lmin is illustrated in the code below (note that the discrepancy between the data and the inhomogeneous Poisson benchmark is of the same type in all cases).
The other part about the estimate stopping at r=0.5 is not surprising since border correction is used and the window is the unit square. When r=0.5 the entire window is "shaved off", so there is no data left.
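A quick way to see this (not part of the original answer) is to compute the area of the eroded window: the region available to the border-corrected estimator shrinks to nothing as r approaches 0.5.
library(spatstat)
## area of the unit square remaining after removing a border of width r
sapply(c(0.1, 0.3, 0.49), function(r) area(erosion(square(1), r)))
## approximately 0.64, 0.16, 0.0004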
library(spatstat)
#> spatstat 1.56-1.031 (nickname: 'Psycho chicken')
X <- swedishpines
lam <- density(X, at = "points", sigma = 10)
lam_min <- min(lam)
plot(Finhom(X, lmin = lam_min), legend = FALSE, col = 1, main = "Finhom for different values of lmin")
s <- 2^(1:3)
for(i in seq_along(s)){
  plot(Finhom(X, lmin = lam_min/s[i]), col = i+1, add = TRUE)
}
s <- c(1,s)
legend("topleft", legend = paste0("min(lam)/", s), lty = 1, col = 1:length(s))
Created on 2018-11-24 by the reprex package (v0.2.1)
The "inhomogeneous" functions Kinhom, Ginhom, Finhom involve making adjustments for the spatially varying intensity of the point process. They only work if (a) the intensity has been accurately estimated, and (b) the point process satisfies certain technical assumptions which justify the adjustment calculation (see the references in the help files, or the relevant section of the spatstat book).
The plot of density(ex, sigma=bw.diggle) shows very high peaks and very low troughs in the estimated intensity, suggesting that the data are under-smoothed, so that (a) is not satisfied. The results obtained with bw.scott or bw.CvL are much better behaved. (Remember that bw.diggle is designed for clustered patterns.) For example, I get a reasonably nice plot with
plot(Finhom(ex, sigma=bw.CvL))
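To see the under-smoothing described above (a sketch, not code from the original answer), you can compare the intensity surfaces produced by the two bandwidth selectors for the toy pattern ex:
## side-by-side intensity estimates under two bandwidth choices
par(mfrow = c(1, 2))
plot(density(ex, sigma = bw.diggle(ex)), main = "sigma = bw.diggle")
plot(density(ex, sigma = bw.CvL(ex)), main = "sigma = bw.CvL")
par(mfrow = c(1, 1))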
Yes, it does seem a bit disconcerting that the results are different when 'lambda' is given as a pixel image and as a numeric vector. This occurs, as Ege explains, because of the different rules for calculating the default value of the important argument lmin. It's not really a bug -- the original authors of the code for Ginhom and Finhom designed it this way; I will consult them for advice about whether we should change it. In the meantime, you can make the two calculations agree if you specify the value of lmin.
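For example (a sketch based on the explanation above, not code from either answer), supplying the same lmin in both calls should make the two estimates essentially coincide:
lam_pts <- density(ex, sigma = diggle, at = "points")
lmin_common <- min(lam_pts)
Fi_a <- Finhom(ex, sigma = diggle, lmin = lmin_common)
Fi_b <- Finhom(ex, lambda = lam_pts, lmin = lmin_common)
## overlay the two estimates; they should now (essentially) agree
plot(Fi_a, main = "Finhom with a common lmin")
plot(Fi_b, add = TRUE, col = 2)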

Fitting truncated normal distribution in R

I'm trying to fit a truncated normal distribution to data using fitdistr (from MASS), specifying upper and lower bounds. However, when I compare the MLE-fitted parameters to those of an MLE fit without bounds (fitdistrplus::fitdist), they seem to be the same.
library(fitdistrplus)
library(MASS)
dt <- rnorm(100, 1, 0.5)
cat("truncated:", fitdistr(dt, "normal", lower = 0, upper = 1.5, method = "mle")$estimate,
"original:", fitdist(dt, "norm", method = "mle")$estimate, sep = "\n")
truncated:
1.034495
0.4112629
original:
1.034495
0.4112629
I'm not a statistics genius, but I'm pretty sure the parameters should be different: if the distribution is truncated, both the mean and the sd estimates should change (because the density is rescaled). Is this right?
Thanks for your advice
Cheers,
Simon
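(Not part of the original question, but for context.) As the identical output suggests, the lower/upper arguments of fitdistr act as bounds on the parameters passed to the optimiser (and have no effect on the closed-form normal fit); they are not truncation bounds of the density. A hedged sketch of one way to actually fit a truncated normal, assuming the truncnorm package and fitdistrplus's custom-distribution interface:
library(fitdistrplus)
library(truncnorm)   # provides dtruncnorm()/ptruncnorm(); not used in the original question
## truncated-normal density and CDF with fixed truncation bounds [0, 1.5]
dtnorm <- function(x, mean, sd) dtruncnorm(x, a = 0, b = 1.5, mean = mean, sd = sd)
ptnorm <- function(q, mean, sd) ptruncnorm(q, a = 0, b = 1.5, mean = mean, sd = sd)
set.seed(1)   # the original post did not set a seed; added for reproducibility only
dt <- rnorm(100, 1, 0.5)
dt <- dt[dt > 0 & dt < 1.5]   # keep only observations inside the truncation bounds
fit_trunc <- fitdist(dt, "tnorm", start = list(mean = 1, sd = 0.5))
fit_norm  <- fitdist(dt, "norm")
rbind(truncated = fit_trunc$estimate, untruncated = fit_norm$estimate)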
