My intention was to write several functions for assessing the overall similarity between two covariance matrices, either by multiplying them with random vectors and correlating the response vectors, or by bootstrapping one of the matrices to obtain a distribution of correlation coefficients that can serve as a reference. In both cases I got puzzling results: the observed between-matrix correlation was as high as 0.93, while the bootstrap distribution reached at most about 0.2. This is the function's code:
resamplerSimAlt <- function(mat1, mat2, numR, graph = FALSE)
{
  statSim <- numeric(numR)
  mat1vcv <- cov(mat1)
  mat2vcvT <- cov(mat2)
  ltM1 <- mat1vcv[col(mat1vcv) <= row(mat1vcv)]
  ltM2T <- mat2vcvT[col(mat2vcvT) <= row(mat2vcvT)]
  statObs <- cor(ltM1, ltM2T)
  indice <- c(1:length(mat2))
  resamplesIndices <- lapply(1:numR, function(i) sample(indice, replace = F))
  for (i in 1:numR)
  {
    ss <- mat2[sample(resamplesIndices[[i]])]
    ss <- matrix(ss, nrow = dim(mat2)[[1]], ncol = dim(mat2)[[2]])
    mat2ss <- cov(ss)
    ltM2ss <- mat2ss[col(mat2ss) <= row(mat2ss)]
    statSim[i] <- cor(ltM1, ltM2ss)
  }
  if (graph == TRUE)
  {
    plot(1, main = "resampled data density distribution", xlim = c(0, statObs + 0.1), ylim = c(0, 14))
    points(density(statSim), type = "l", lwd = 2)
    abline(v = statObs)
    text(10, 10, "observed correlation = ")
  }
  list(obs = statObs, sumFit = sum(statSim > statObs)/numR)
}
In fact it is hard for me to believe that the correlation coefficient between the two original matrices is high, while the one between the first original matrix and the resampled second one is at most 0.2 after 10000 bootstrap repetitions.
Any comments on the validity of the code?
Sorry, I am not well versed enough to follow your goal of checking the correlation coefficient between two covariance matrices, but I tried to understand your code as such.
If I am right, you are creating 10,000 different matrices from the same matrix (mat2) by reshuffling its cells, and recomputing the correlation between the covariance matrix of mat1 and the covariance matrix of each resampled array. Those values are stored in the statSim variable.
You said the original correlation coefficient (statObs) was high, but the maximum of statSim was low, which is strange. I think the problem is with your result list:
list( obs = statObs , sumFit = sum(statSim > statObs)/numR)
where you return the original correlation coefficient (obs), but not the maximum of the simulated values alongside sumFit. For that you might use e.g. max(statSim). I see the point in returning sumFit to check whether the resampling brought any improvement over the observed correlation coefficient, but based on your code, I see no problem with the theory.
Updated function with max of simulated correlation coefficients:
resamplerSimAlt <- function(mat1, mat2, numR, graph = FALSE)
{
  statSim <- numeric(numR)
  mat1vcv <- cov(mat1)
  mat2vcvT <- cov(mat2)
  ltM1 <- mat1vcv[col(mat1vcv) <= row(mat1vcv)]
  ltM2T <- mat2vcvT[col(mat2vcvT) <= row(mat2vcvT)]
  statObs <- cor(ltM1, ltM2T)
  indice <- c(1:length(mat2))
  resamplesIndices <- lapply(1:numR, function(i) sample(indice, replace = F))
  for (i in 1:numR)
  {
    ss <- mat2[sample(resamplesIndices[[i]])]
    ss <- matrix(ss, nrow = dim(mat2)[[1]], ncol = dim(mat2)[[2]])
    mat2ss <- cov(ss)
    ltM2ss <- mat2ss[col(mat2ss) <= row(mat2ss)]
    statSim[i] <- cor(ltM1, ltM2ss)
  }
  if (graph == TRUE)
  {
    plot(1, main = "resampled data density distribution", xlim = c(0, statObs + 0.1), ylim = c(0, 14))
    points(density(statSim), type = "l", lwd = 2)
    abline(v = statObs)
    text(10, 10, "observed correlation = ")
  }
  list(obs = statObs, sumFit = sum(statSim > statObs)/numR, max = max(statSim))
}
What I had run:
> mat1 <- matrix(runif(25),5,5)
> mat2 <- mat1+0.2
> resamplerSimAlt(mat1, mat2, 10000)
$obs
[1] 1
$sumFit
[1] 0
$max
[1] 0.94463
And with random mat2:
> mat2 <- matrix(runif(25),5,5)
> resamplerSimAlt(mat1, mat2, 10000)
$obs
[1] 0.31144
$sumFit
[1] 0.9124
$max
[1] 0.9231
My answer might not be a real answer. If that is the case, please give more details about the problem :)
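As a side note, the other approach you mention (multiplying both covariance matrices with the same random vectors and correlating the two response vectors) could be sketched roughly as below. This is only an illustration, not a validated implementation; the function name randomSkewers and the number of random vectors are my own choices:
randomSkewers <- function(mat1, mat2, nSkewers = 1000) {
  C1 <- cov(mat1)
  C2 <- cov(mat2)
  p  <- ncol(C1)
  sapply(seq_len(nSkewers), function(i) {
    v <- rnorm(p)
    v <- v / sqrt(sum(v^2))      # random unit vector ("skewer")
    cor(C1 %*% v, C2 %*% v)      # correlation of the two response vectors
  })
}
## e.g. mean(randomSkewers(mat1, mat2)) as an overall similarity measure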
I have a function that generates a sine wave
my.sin <- function(vec,a,f,p) a*sin(f*vec+p)
vec = vector indices
a = amplitude
f = frequency
p = phase
I also have some data my_var that I want to approximate with several sinusoids
set.seed(22)
my_var <- rnorm(100)
plot(my_var,t="l")
There is also a fitness function that calculates the approximation error for the sum of two sinusoids, but there could be any number of sinusoids:
fit <- function(x, test = F){
  vec <- 1:length(my_var)
  s1 <- my.sin(vec = vec, a = x[1], f = x[2], p = x[3])
  s2 <- my.sin(vec = vec, a = x[4], f = x[5], p = x[6])
  res_sin <- s1 + s2
  err <- sqrt(sum((res_sin - my_var)^2))
  if(test) return(res_sin)
  return(err/-1)
}
Next, I use a genetic algorithm to find the best solution.
library(GA)
GA <- ga("real-valued",
fit,
lower = rep(-5,6),
upper = rep( 5,6),
maxiter = 300,
popSize = 300)
sol <- tail(GA@solution, 1)[1, ]
ga_sin <- fit(sol,test = T)
lines(ga_sin,col=4,lwd=2)
best_sin_comb <- matrix(sol,ncol = 3,byrow = T)
colnames(best_sin_comb) <- c("amplitude","frequency","phase")
print(best_sin_comb)
result
amplitude frequency phase
[1,] -0.3435402 1.5458888 1.8904578
[2,] -0.4326791 -0.4886035 0.5606401
My question is: can the approximation be made more efficient in terms of computation time, perhaps with a different search algorithm or something else? I would also like to keep compatibility with the function my.sin.
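One possibility that keeps my.sin and fit() unchanged is to hand fit() to a local optimizer such as optim(). This is only a sketch, not a benchmark: the random starting point and the bounds below are assumptions, and a local optimizer can get stuck in a local optimum where the GA might not:
set.seed(22)
opt <- optim(par     = runif(6, -5, 5),      # assumed random start inside the GA bounds
             fn      = fit,
             method  = "L-BFGS-B",
             lower   = rep(-5, 6),
             upper   = rep( 5, 6),
             control = list(fnscale = -1))   # maximise, since fit() returns -error
opt_sin <- fit(opt$par, test = TRUE)
lines(opt_sin, col = 2, lwd = 2)
best_sin_comb_opt <- matrix(opt$par, ncol = 3, byrow = TRUE)
colnames(best_sin_comb_opt) <- c("amplitude", "frequency", "phase")
A common hybrid pattern is to run the GA only briefly and then polish its best solution with optim() started from that point.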
I did simulations in R and plotted the results in histograms. There is no problem adding probability density into the histograms with the density() function. But for some reason I get very strange results when I plot the theoretical densities to the histograms for comparison purposes. Here are two example codes and two pictures. The blue theoretical pdfs are way off and I do not know why. Could someone with R skills point out my mistakes?
# generating 10000 samples thrice from U(0,1) distribution
# and sorting them for the statistics:
n <- 10000
samples1 <- data.frame('0'=c(rep(NA,4)))
samples2 <- data.frame('0'=c(rep(NA,10)))
samples3 <- data.frame('0'=c(rep(NA,10)))
for (i in 1:n) {
  new <- runif(4)
  samples1[, ncol(samples1) + 1] <- sort(new)
  colnames(samples1)[ncol(samples1)] <- i
  new <- runif(10)
  samples2[, ncol(samples2) + 1] <- sort(new)
  colnames(samples2)[ncol(samples2)] <- i
  new <- runif(10)
  samples3[, ncol(samples3) + 1] <- sort(new)
  colnames(samples3)[ncol(samples3)] <- i
}
# dropping the first (useless) columns:
samples1 <- samples1[-c(1)]
samples2 <- samples2[-c(1)]
samples3 <- samples3[-c(1)]
# selecting the statistics from the samples:
# X_2:4
stat24 <- rep(NA, n)
for (i in 1:n) {
  stat24[i] <- samples1[2, i]
}
# X_2:10
stat210 <- rep(NA, n)
for (i in 1:n) {
  stat210[i] <- samples2[2, i]
}
# X_10:10
stat1010 <- rep(NA, n)
for (i in 1:n) {
  stat1010[i] <- samples3[10, i]
}
# plotting the histograms and Beta pdfs:
hist(stat24, freq = FALSE)
lines(dbeta(stat24, 2, 5), col='blue')
lines(density(stat24), col='red')
The distribution of the kth order statistic of n Uniform(0,1) samples follows a Beta(k, n-k+1) distribution, which appears here as the odd blue stroke on the right.
n <- 10000
random_variable_F <- rep(NA,n)
# generating 10000 samples of sizes 10 and 5 and computing F:
for (i in 1:n) {
  x <- rnorm(10, mean = 10, sd = sqrt(5))
  y <- rnorm(5, mean = 20, sd = sqrt(10))
  random_variable_F[i] <- ((var(x))*5)/((var(y)*10))
}
#head(random_variable_F)
# plotting the histogram:
hist(random_variable_F, freq = F)
lines(density(random_variable_F), col='red')
lines(df(random_variable_F, 9, 4), col='blue')
The random variable F follows an F-distribution. The parameters are the sample sizes minus one, in this case 10-1=9 and 5-1=4. The theoretical curve is quite wild:
If you pass a single vector to lines, it assumes that this is a vector of y values you want to plot. It plots the first y value at x = 1, the second y value at x = 2, etc, all the way up to x = length(y). In your case, random_variable_F is an unordered random variable, and you are just plotting its sequential values at 1:10000 along the x axis.
Clearly, you want the function y = df(x) to be plotted, so you need to pass random_variable_F as the x values and df(random_variable_F) as the y values. You will also need to sort random_variable_F first to ensure the line is plotted from left to right:
hist(random_variable_F, freq = F)
lines(density(random_variable_F), col='red')
lines(sort(random_variable_F), df(sort(random_variable_F), 9, 4), col='blue')
Note that this doesn't happen when you plot lines(density(random_variable_F)), because density produces a list containing ordered x and y values rather than a single vector.
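The same fix applies to the first (Beta) example; a sketch of the corrected plot, using Beta(2, 4-2+1) = Beta(2, 3) as the density of the 2nd order statistic of 4 uniforms:
hist(stat24, freq = FALSE)
lines(density(stat24), col = 'red')
xs <- sort(stat24)
lines(xs, dbeta(xs, 2, 3), col = 'blue')  # X_(2:4) ~ Beta(2, 3)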
I have two different columns:
a <- c(65.96924, 7.084438, 81.65288)
b <- c(2197.62974660844, 1934.40212085843, 1939.64438773692)
They are different, but I wonder why I get similar output when applying:
quants <- seq(0, 1, length.out = 51)
ecdf(a)(quantile(a, quants))
ecdf(b)(quantile(b, quants))
This can be better understood if you calculate the ECDF by hand instead of using the ecdf() function. The ECDF is just a step function that increases by 1/n at each of the n data points; since both vectors have the same length, you get the same result.
quants <- seq(0, 1, length.out = 51)
A <- sort(a)
B <- sort(b)
A_ecdf <- 1:length(A)/length(A)
B_ecdf <- 1:length(B)/length(B)
plot(A, A_ecdf, type = "s", col = 1)
par(new = TRUE)
plot(B, B_ecdf, type = "s", col = 2)
Though I am not sure exactly what you are trying to do, a possible solution from How to find quantiles of an empirical cumulative density function (ECDF) may be:
my_quantile <- function(x, prob) {
  n <- length(x)
  approx(seq(0, 1, length = n), x, prob)$y
}
my_quantile(A, quants)
my_quantile(B, quants)
You are calculating the empirical cumulative distribution function of the quantiles themselves, not of the distribution evaluated at a data value, e.g. ecdf(b)(2000). By definition, you end up with a straight line.
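A quick illustration of that point, using the vectors a and b from the question (the probe value 2000 comes from the sentence above):
ecdf(b)(2000)              # 2 of the 3 values in b are <= 2000, so this returns 2/3
ecdf(b)(quantile(b, 0.5))  # also 2/3: the median of a 3-point sample is its 2nd sorted value,
ecdf(a)(quantile(a, 0.5))  # so this is 2/3 for a as well -- whatever the actual values are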
So I have a dataset with 600 points, their latitude, longitude, and demands.
I have to make clusters such that the points in each cluster are near each other and the total demand of each cluster does not exceed a certain capacity limit.
A sample dataset for the problem:
set.seed(123)
id<- seq(1:600)
lon <- rnorm(600, 88.5, 0.125)
lat <- rnorm(600, 22.4, 0.15)
demand <- round(rnorm(600, 40, 20))
df<- data.frame(id, lon, lat, demand)
What I want to have approximately:
What I am getting (cluster boundaries are approximated):
The code I've written:
library(tidyverse)
constrained_cluster <- function(df, capacity = 170){
  p <- pi/180  # degrees-to-radians factor (assumed; p was not defined in the original snippet)
  lon_max <- max(df$lon)
  lat_max <- max(df$lat)
  # Calculating the distance between an extreme point and all other points
  df$distance <- 6377.83*acos(sin(lat_max*p)*sin(df$lat*p) + cos(lat_max*p)*cos(df$lat*p)*cos((lon_max - df$lon)*p))
  df <- df[order(df$distance, decreasing = FALSE), ]
  d <- 0
  cluster_number <- 1
  cluster_list <- c()
  i <- 1
  # Loop that forms the clusters and fills up cluster_list accordingly
  while (i <= length(df$distance)){
    d <- d + df$demand[i]
    if(d <= capacity){
      cluster_list[i] <- cluster_number
      i <- i + 1
    }
    else{
      cluster_number <- cluster_number + 1
      d <- 0
    }
  }
  # Return a dataframe with the list of clusters
  return(cbind(df, as.data.frame(cluster_list)))
}
df_with_cluster<- constrained_cluster(df, capacity = 1000)
Here is one possible approach, in which I treat the problem directly as an optimisation problem.
Suppose you have a feasible partition of the rows into groups. Not necessarily a good one, but one that does not violate the constraints. For every
group (cluster), you compute the centre. Then you
compute the distances of all points in a group to the
group's centre, and sum them. In this way, you have a
measure of quality of your initial partition.
Now, randomly pick on row, and move it into another
group. You get new solution. Complete the steps
as before, and compare the new solution's quality with
the previous one. If it's better, keep it. If it's
worse, stay with the old solution. Now repeat this
whole procedure for a fixed number of iterations.
This process is called a Local Search. Of course, it is
not guaranteed it will take you to an optimum
solution. But it will likely give you a good
solution. (Note that k-means implementations are
typically stochastic as well, and there is no guaranty
for an "optimal" partition.)
The good thing about a Local Search is that it gives
you much flexibility. For instance, I assumed you
started with a feasible solution. Suppose you make a
random move (i.e. move one row into another cluster),
but now this new cluster is to big. You could now simply discard this new, infeasible solution, and draw a new one.
Here is a code example, really just an outline; but with luck it is useful for you.
set.seed(123)
id<- seq(1:600)
lon <- rnorm(600, 88.5, 0.125)
lat <- rnorm(600, 22.4, 0.15)
demand <- round(rnorm(600, 40, 20))
df<- data.frame(id, lon, lat, demand)
Fix a number of clusters, k.
k <- 5
Start with kmeans and plot the solution.
par(mfrow = c(1, 3))
km <- kmeans(cbind(df$lat, df$lon), centers = k)
cols <- hcl.colors(n = k, "Cold")
plot(df$lon,
     df$lat,
     type = "p", pch = 19, cex = 0.5,
     main = "kmeans")
for (i in seq_len(k)) {
  lines(df$lon[km$cluster == i],
        df$lat[km$cluster == i],
        type = "p", pch = 19,
        col = cols[i])
}
Now a Local Search. I use an implementation in package NMOF (which I maintain).
library("NMOF")
## a random initial solution
x0 <- sample(1:k, length(id), replace = TRUE)
X <- as.matrix(df[, 2:3])
The objective function: it takes a partition x and computes the sum of distances, for all clusters.
sum_diff <- function(x, X, k, ...) {
  groups <- seq_len(k)
  d_centre <- numeric(k)
  for (g in groups) {
    centre <- colMeans(X[x == g, ])
    d <- t(X[x == g, ]) - centre
    d_centre[g] <- sum(sqrt(colSums(d * d)))
  }
  sum(d_centre)
}
The neighbourhood function: it takes a partition and moves
one row into another cluster.
nb <- function(x, k, ...) {
  groups <- seq_len(k)
  x_new <- x
  p <- sample.int(length(x), 1)
  g_ <- groups[-x_new[p]]
  x_new[p] <- g_[sample.int(length(g_), 1)]
  x_new
}
Run the Local Search. I actually use a method called Threshold Accepting, which is based on Local Search, but can move away from local minima. See ?NMOF::TAopt for references on that method.
sol <- TAopt(sum_diff,
list(x0 = x0,
nI = 20000,
neighbour = nb),
X = as.matrix(df[, 2:3]),
k = k)
Plot the solution.
plot(df$lon,
     df$lat,
     type = "p", pch = 19, cex = 0.5,
     main = "Local search")
for (i in seq_len(k)) {
  lines(df$lon[sol$xbest == i],
        df$lat[sol$xbest == i],
        type = "p", pch = 19,
        col = cols[i])
}
Now, one way to add a capacity constraint. We start with a feasible solution.
## CAPACITY-CONSTRAINED
max.demand <- 6600
all(tapply(df$demand, x0, sum) < max.demand)
## TRUE
The constraint is handled in the neighbourhood. If the new solution exceeds the capacity, it is discarded.
nb_constr <- function(x, k, demand, max.demand, ...) {
  groups <- seq_len(k)
  x_new <- x
  p <- sample.int(length(x), 1)
  g_ <- groups[-x_new[p]]
  x_new[p] <- g_[sample.int(length(g_), 1)]
  ## if capacity is exceeded, return
  ## the original solution
  if (sum(demand[x_new == x_new[p]]) > max.demand)
    x
  else
    x_new
}
Run the method and compare the results.
sol <- TAopt(sum_diff,
list(x0 = x0,
nI = 20000,
neighbour = nb_constr),
X = as.matrix(df[, 2:3]),
k = k,
demand = df$demand,
max.demand = max.demand)
plot(df$lon,
     df$lat,
     type = "p", pch = 19, cex = 0.5,
     main = "Local search w/ constraint")
for (i in seq_len(k)) {
  lines(df$lon[sol$xbest == i],
        df$lat[sol$xbest == i],
        type = "p", pch = 19,
        col = cols[i])
}
all(tapply(df$demand, sol$xbest, sum) < max.demand)
## TRUE
This is really just an example and could be improved. For instance, the objective function here recomputes the distance of all groups, when it would only need to look at the changed groups.
Something like this might get you started?
nmax <- 100
num.centers <- 1
km <- kmeans(cbind(df$lat, df$lon), centers = num.centers)
# check that no cluster is larger than nmax
while (prod(km$size < nmax) == 0) {
  num.centers <- num.centers + 1
  km <- kmeans(cbind(df$lat, df$lon), centers = num.centers)
}
plot(df$lon, df$lat, col = km$cluster, pch = 20)
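The loop above limits the number of points per cluster (km$size); if the limit is meant to apply to the summed demand instead, which is how I read the original capacity constraint, the check could be adapted like this (max.demand = 1000 mirrors the call in the question):
max.demand <- 1000
num.centers <- 1
km <- kmeans(cbind(df$lat, df$lon), centers = num.centers)
# keep adding centers until every cluster's summed demand is within the limit
while (any(tapply(df$demand, km$cluster, sum) > max.demand)) {
  num.centers <- num.centers + 1
  km <- kmeans(cbind(df$lat, df$lon), centers = num.centers)
}
plot(df$lon, df$lat, col = km$cluster, pch = 20)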
I have written a custom likelihood function that fits a multi-data model integrating mark-recapture and telemetry data (sensu Royle et al. 2013 Methods in Ecology and Evolution). The likelihood function is designed to be flexible in terms of whether and how many covariates are specified for the different linear models in different likelihood components, which is determined by the values supplied as function arguments (i.e., the data matrices "detcovs" and "dencovs" in my code). The likelihood function works when I supply it directly to optimization functions (e.g., optim or nlm), but it does not play nicely with the mle2 function in the bbmle package. My problem is that I keep running into the following error: "some named arguments in 'start' are not arguments to the specified log-likelihood function". This is my first attempt at writing custom likelihood functions, so I'm sure there are general coding conventions of which I'm unaware that would make such tasks more efficient and amenable to the mle2 function. Below are my likelihood function, the code creating the starting-value objects, and the code calling the mle2 function. Any advice on how to solve the error, and general comments on writing cleaner functions, are welcome. Many thanks in advance.
Edit: As requested, I have simplified the likelihood function and provided code to simulate reproducible data to which the model can be fit. Included in the simulation code are 2 custom functions and use of the raster function from the raster package. Hopefully, I have sufficiently simplified everything to enable others to troubleshoot. Again, many thanks for your help!
Jared
Likelihood function:
CSCR.RSF.intlik2.EXAMPLE <- function(alpha0, sigma, alphas = NULL, betas = NULL, n0, yscr = NULL,
                                     K = NULL, X = X, trapcovs = NULL, Gden = NULL, Gdet = NULL,
                                     ytel = NULL, stel = NULL, dencovs = NULL, detcovs = NULL){
  #
  # this version of the code handles a covariate on log(Density). This is starting value 5
  #
  # start = vector of starting values
  # yscr = nind x ntraps encounter matrix
  # K = number of occasions
  # X = trap locations
  # Gden = matrix with grid cell coordinates for density raster
  # Gdet = matrix with grid cell coordinates for RSF raster
  # dencovs = all covariate values for all nGden pixels in density raster
  # trapcovs = covariate value at trap locations
  # detcovs = all covariate values for all nGrsf pixels in RSF raster
  # ytel = nguys x nGdet matrix of telemetry fixes in each nGdet pixels
  # stel = home range center of telemetered individuals, IF you wish to estimate it. Not necessary
  # alphas = starting values for RSF/detfn coefficients excluding sigma and intercept
  # alpha0 = starting values for RSF/detfn intercept
  # sigma = starting value for RSF/detfn sigma
  # betas = starting values for density function coefficients
  # n0 = starting value for number of undetected individuals on log scale
  #
  n0 = exp(n0)
  nGden = nrow(Gden)
  D = e2dist(X, Gden)
  nGdet <- nrow(Gdet)
  alphas = alphas
  loglam = alpha0 - (1/(2*sigma*sigma))*D*D + as.vector(trapcovs%*%alphas) # ztrap recycled over nG
  psi = exp(as.vector(dencovs%*%betas))
  psi = psi/sum(psi)
  probcap = 1 - exp(-exp(loglam))
  #probcap = (exp(theta0)/(1+exp(theta0)))*exp(-theta1*D*D)
  Pm = matrix(NA, nrow = nrow(probcap), ncol = ncol(probcap))
  ymat = yscr
  ymat = rbind(yscr, rep(0, ncol(yscr)))
  lik.marg = rep(NA, nrow(ymat))
  for(i in 1:nrow(ymat)){
    Pm[1:length(Pm)] = (dbinom(rep(ymat[i,], nGden), rep(K, nGden), probcap[1:length(Pm)], log = TRUE))
    lik.cond = exp(colSums(Pm))
    lik.marg[i] = sum(lik.cond*psi)
  }
  nv = c(rep(1, length(lik.marg) - 1), n0)
  part1 = lgamma(nrow(yscr) + n0 + 1) - lgamma(n0 + 1)
  part2 = sum(nv*log(lik.marg))
  out = -1*(part1 + part2)
  lam = t(exp(a0 - (1/(2*sigma*sigma))*t(D2) + as.vector(detcovs%*%alphas))) # recycle zall over all ytel guys
  # lam is now nGdet x nG!
  denom = rowSums(lam)
  probs = lam/denom # each column is the probs for a guy at column [j]
  tel.loglik = -1*sum(ytel*log(probs))
  out = out + tel.loglik
  out
}
Data simulation code:
library(raster)
library(bbmle)
e2dist <- function (x, y){
  i <- sort(rep(1:nrow(y), nrow(x)))
  dvec <- sqrt((x[, 1] - y[i, 1])^2 + (x[, 2] - y[i, 2])^2)
  matrix(dvec, nrow = nrow(x), ncol = nrow(y), byrow = F)
}
spcov <- function(R) {
  v <- sqrt(nrow(R))
  D <- as.matrix(dist(R))
  V <- exp(-D/2)
  cov1 <- t(chol(V)) %*% rnorm(nrow(R))
  Rd <- as.data.frame(R)
  colnames(Rd) <- c("x", "y")
  Rd$C <- as.numeric((cov1 - mean(cov1)) / sd(cov1))
  return(Rd)
}
set.seed(1234)
co <- seq(0.3, 0.7, length=5)
X <- cbind(rep(co, each=5),
rep(co, times=5))
B <- 10
co <- seq(0, 1, length=B)
Z <- cbind(rep(co, each=B), rep(co, times=B))
dencovs <- cbind(spcov(Z),spcov(Z)[,3]) # ordered as reading raster image from left to right, bottom to top
dimnames(dencovs)[[2]][3:4] <- c("dencov1","dencov2")
denr.list <- vector("list",2)
for(i in 1:2){
  denr.list[[i]] <- raster(
    list(x = seq(0, 1, length = 10),
         y = seq(0, 1, length = 10),
         z = t(matrix(dencovs[, i + 2], 10, 10, byrow = TRUE)))
  )
}
B <- 20
co <- seq(0, 1, length=B)
Z <- cbind(rep(co, each=B), rep(co, times=B))
detcovs <- cbind(spcov(Z),spcov(Z)[,3]) # ordered as reading raster image from left to right, bottom to top
dimnames(detcovs)[[2]][3:4] <- c("detcov1","detcov2")
detr.list <- vector("list", 2)
trapcovs <- matrix(0, nrow(X), 2)
for(i in 1:2){
  detr.list[[i]] <- raster(
    list(x = seq(0, 1, length = 20),
         y = seq(0, 1, length = 20),
         z = t(matrix(detcovs[, i + 2], 20, 20, byrow = TRUE)))
  )
  trapcovs[, i] <- extract(detr.list[[i]], X)
}
alpha0 <- -3
sigma <- 0.15
alphas <- c(1,-1)
beta0 <- 3
betas <- c(-1,1)
pixelArea <- (dencovs$y[2] - dencovs$y[1])^2
mu <- exp(beta0 + as.matrix(dencovs[,3:4])%*%betas)*pixelArea
EN <- sum(mu)
N <- rpois(1, EN)
pi <- mu/sum(mu)
s <- dencovs[sample(1:nrow(dencovs), size=N, replace=TRUE, prob=pi),1:2]
J <- nrow(X)
K <- 10
yc <- d <- p <- matrix(NA, N, J)
D <- e2dist(s,X)
loglam <- t(alpha0 - t((1/(2*sigma*sigma))*D*D) + as.vector(trapcovs%*%alphas))
p <- 1-exp(-exp(loglam))
for(i in 1:N) {
  for(j in 1:J) {
    yc[i, j] <- rbinom(1, K, p[i, j])
  }
}
detected <- apply(yc>0, 1, any)
yscr <- yc[detected,]
ntel <- 5
nfixes <- 100
poss.tel <- which(s[,1]>0.2 & s[,1]<0.8 & s[,2]>0.2 & s[,2]<0.8)
stel.id <- sample(poss.tel,ntel)
stel <- s[stel.id,]
ytel <- matrix(NA,ntel,nrow(detcovs))
d <- e2dist(stel,detcovs[,1:2])
lam <- t(exp(1 - t((1/(2*sigma*sigma))*d*d) + as.vector(as.matrix(detcovs[,3:4])%*%alphas)))
for(i in 1:ntel){
  ytel[i, ] <- rmultinom(1, nfixes, lam[i, ]/sum(lam[i, ]))
}
Specify starting values and call mle2 function:
start1 <- list(alpha0=alpha0,sigma=sigma,alphas=alphas,betas=betas,n0=log(N-nrow(yscr)))
parnames(CSCR.RSF.intlik2.EXAMPLE) <- names(start1)
out1 <- mle2(CSCR.RSF.intlik2.EXAMPLE, start = start1, method = "SANN", optimizer = "optim",
             data = list(yscr = yscr, K = K, X = X, trapcovs = trapcovs, Gden = dencovs[, 1:2],
                         Gdet = detcovs[, 1:2], ytel = ytel, stel = stel,
                         dencovs = as.matrix(dencovs[, 3:4]), detcovs = as.matrix(detcovs[, 3:4]))
)
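As a pointer, the error arises when the names in start cannot be matched to the likelihood's parameters; with vector-valued start elements such as alphas and betas, bbmle's vecpar mode, where the likelihood takes a single parameter vector and parnames() supplies the names, is the usual way to set this up. Below is a minimal sketch with a hypothetical toy likelihood, not the model above:
library(bbmle)
## hypothetical toy data and negative log-likelihood, only to illustrate the pattern
set.seed(1)
y_toy <- rnorm(50, mean = 2, sd = 1)
toy_nll <- function(pars) {            # takes ONE parameter vector
  m    <- pars[1]
  logs <- pars[2]
  -sum(dnorm(y_toy, mean = m, sd = exp(logs), log = TRUE))
}
parnames(toy_nll) <- c("m", "logs")    # declare the parameter names for mle2
fit_toy <- mle2(toy_nll,
                start = c(m = 0, logs = 0),
                vecpar = TRUE)         # start is matched via parnames, not formal arguments
coef(fit_toy)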