I was using the nloptr package to solve an optimization problem with a 9-variable cost function, using the following program:
eval_f <- function(x) {
  return(list(
    "objective" =
      0.0404*x[1]^2 + 4.4823*x[1] + 0.4762 +
      0.024*x[2]^2  + 3.9767*x[2] + 0.3737 +
      0.0246*x[3]^2 + 3.6992*x[3] + 0.9425 +
      0.0214*x[4]^2 + 3.5896*x[4] + 0.7615 +
      0.0266*x[5]^2 + 3.8197*x[5] + 0.2799 +
      0.0262*x[6]^2 + 3.7884*x[6] + 0.307 +
      0.0362*x[7]^2 + 4.4927*x[7] + 0.1549 +
      0.0344*x[8]^2 + 4.4066*x[8] - 0.2472 +
      0.0241*x[9]^2 + 4.227*x[9],
    "gradient" = c(2*0.0404*x[1] + 4.4823,
                   2*0.024*x[2]  + 3.9767,
                   2*0.0246*x[3] + 3.6992,
                   2*0.0214*x[4] + 3.5896,
                   2*0.0266*x[5] + 3.8197,
                   2*0.0262*x[6] + 3.7884,
                   2*0.0362*x[7] + 4.4927,
                   2*0.0344*x[8] + 4.4066,
                   2*0.0241*x[9] + 4.227)))
}
eval_g_eq <- function(x) {
  # Balance must be defined before this function is called
  constr <- c(x[1] + x[2] + x[3] + x[4] + x[5] + x[6] + x[7] + x[8] + x[9] - Balance)
  grad <- rep(1, 9)
  return(list("constraints" = constr, "jacobian" = grad))
}
lb  <- rep(0, 9)    # lower bounds; lb <= x_0 <= ub must hold, with 9 entries each
ub  <- rep(50, 9)   # upper bounds
x_0 <- rep(25, 9)   # starting point
local_opts <- list( "algorithm" = "NLOPT_LD_MMA","xtol_rel" = 1.0e-9 )
opts <- list( "algorithm" = "NLOPT_LD_AUGLAG","xtol_rel" = 1.0e-9,"maxeval" = 10000, "local_opts" = local_opts )
res <- nloptr(x0 = x_0, eval_f = eval_f, lb = lb, ub = ub,
              eval_g_eq = eval_g_eq, opts = opts)
The code works fine, but I need to solve this optimization over a period of 168 hours, and the lower and upper bounds are different at each time step. Has anyone implemented this before?
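Conceptually, what I need is something like the following sketch, where lb_mat and ub_mat are hypothetical 168 x 9 matrices (not defined above) holding one row of bounds per hour:
hours <- 168
results <- vector("list", hours)
for (h in seq_len(hours)) {
  # clamp the (warm-started) initial point into this hour's box
  x_start <- pmin(pmax(x_0, lb_mat[h, ]), ub_mat[h, ])
  res <- nloptr(x0 = x_start, eval_f = eval_f,
                lb = lb_mat[h, ], ub = ub_mat[h, ],
                eval_g_eq = eval_g_eq, opts = opts)
  results[[h]] <- res$solution
  x_0 <- res$solution   # warm start for the next hour
}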
I highly suggest using OSQP for this; you can install it from CRAN. The manual contains an example of updating the problem vectors, which I have reproduced here:
library(Matrix)
# Define problem data in the form
# minimize (1/2) x' P x + q' x
# subject to l <= A x <= u
#
P <- Matrix(c(11., 0., 0., 0.), 2, 2, sparse = TRUE)
q <- c(3., 4.)
A <- Matrix(c(-1., 0., -1., 2., 3., 0., -1., -3., 5., 4.), 5, 2, sparse = TRUE)
u <- c(0., 0., -15., 100., 80)
l <- rep_len(-Inf, 5)
settings <- osqpSettings(verbose = FALSE)
model <- osqp(P, q, A, l, u, settings)
# Solve
res <- model$Solve()
# Get solution
x_opt <- res$x
# Define new vector
q_new <- c(10., 20.)
# Update model and solve again
model$Update(q = q_new)
res <- model$Solve()
# Get new solution
x_opt_new <- res$x
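For the original nine-variable problem, the mapping into this form is straightforward. The sketch below is my own translation, not from the OSQP manual, and the Balance value and hourly bounds are hypothetical: the quadratic coefficients go on the diagonal of P (doubled, because of the 1/2 factor), the linear coefficients form q, one row of A encodes the balance equality (with l = u = Balance), and an identity block encodes the per-unit bounds. The hourly bound changes then become cheap model$Update() calls:
library(osqp)
library(Matrix)
a <- c(0.0404, 0.024, 0.0246, 0.0214, 0.0266, 0.0262, 0.0362, 0.0344, 0.0241)
b <- c(4.4823, 3.9767, 3.6992, 3.5896, 3.8197, 3.7884, 4.4927, 4.4066, 4.227)
# constant terms of the cost are dropped; they do not affect the minimizer
P <- Matrix(diag(2 * a), sparse = TRUE)
q <- b
A <- Matrix(rbind(rep(1, 9), diag(9)), sparse = TRUE)
Balance <- 300                        # hypothetical demand value
lb <- rep(0, 9); ub <- rep(50, 9)     # hypothetical hour-1 bounds
model <- osqp(P, q, A, l = c(Balance, lb), u = c(Balance, ub),
              osqpSettings(verbose = FALSE))
res <- model$Solve()
# for every subsequent hour, only the bound vectors change
lb2 <- rep(5, 9); ub2 <- rep(45, 9)   # hypothetical hour-2 bounds
model$Update(l = c(Balance, lb2), u = c(Balance, ub2))
res2 <- model$Solve()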
Disclaimer: I am one of the OSQP authors.
I want to compute a 95% confidence interval for the following problem.
I have written the function f_n in my R code. I first draw a random sample of size 100, then define the function h for lambda, from which I can compute f_n. My question is how to define a function of f_n minus the chi-square quantile and use uniroot to find the confidence interval.
# first draw a sample of size 100
set.seed(201111)
x <- rlnorm(100, 0, 2)
Based on the answer by @RuiBarradas, I tried the following code.
set.seed(2011111)
# I define function h, and use uniroot function to find lambda
h <- function(lam, n)
{
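# x and theta are taken from the global environment (theta is set in the loop below)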
sum((x - theta)/(1 + lam*(x - theta)))
}
# sample size
n <- 100
# the parameter of interest must be a value in [1, 12],
#true_theta<-1
#true_sd<- exp(2)
#x <- rnorm(n, mean = true_theta, sd = true_sd)
x <- rlnorm(100, 0, 2)
xmax <- max(x)
xmin <- min(x)
theta_seq <- seq(from = 1, to = 12, by = 0.01)
f_n <- rep(NA, length(theta_seq))
for (i in seq_along(theta_seq))
{
theta <- theta_seq[i]
lambdamin <- (1/n-1)/(xmax - theta)
lambdamax <- (1/n-1)/(xmin - theta)
lambda <- uniroot(h, interval = c(lambdamin, lambdamax), n = n)$root
f_n[i] <- -sum(log(1 + lambda*(x - theta)))
}
j <- which.max(f_n)
max_fn <- f_n[j]
mle_theta <- theta_seq[j]
plot(theta_seq, f_n, type = "l",
main = expression(Estimated ~ theta),
xlab = expression(Theta),
ylab = expression(f[n]))
points(mle_theta, f_n[j], pch = 19, col = "red")
segments(
x0 = c(mle_theta, xmin),
y0 = c(min(f_n)*2, max_fn),
x1 = c(mle_theta, mle_theta),
y1 = c(max_fn, max_fn),
col = "red",
lty = "dashed"
)
I got the following plot of f_n.
For the 95% CI, I tried
LR <- function(theta, lambda)
{
2*sum(log(1 + lambda*(x - theta))) - qchisq(0.95, df = 1)
}
lambdamin <- (1/n-1)/(xmax - mle_theta)
lambdamax <- (1/n-1)/(xmin - mle_theta)
lambda <- uniroot(h, interval = c(lambdamin, lambdamax), n = n)$root
uniroot(LR, c(xmin, mle_theta), lambda = lambda)$root
The result is 0.07198144. Then the logarithm is log(0.07198144)=-2.631347.
But the following code produces an NA:
uniroot(LR, c(mle_theta, xmax), lambda = lambda)$root
So the 95% CI is theta >= -2.631347.
But the 95% CI should be a closed interval...
Here is a solution.
First of all, the data-generation code is wrong: the parameter theta is supposed to lie in the interval [1, 12], yet the data are generated with location 0 (rlnorm(100, 0, 2)). I change this to rnorm with true_theta = 5.
set.seed(2011111)
# I define function h, and use uniroot function to find lambda
h <- function(lam, n)
{
sum((x - theta)/(1 + lam*(x - theta)))
}
# sample size
n <- 100
# the parameter of interest must be a value in [1, 12],
true_theta <- 5
true_sd <- 2
x <- rnorm(n, mean = true_theta, sd = true_sd)
xmax <- max(x)
xmin <- min(x)
theta_seq <- seq(from = xmin + .Machine$double.eps^0.5,
to = xmax - .Machine$double.eps^0.5, by = 0.01)
f_n <- rep(NA, length(theta_seq))
for (i in seq_along(theta_seq))
{
theta <- theta_seq[i]
lambdamin <- (1/n-1)/(xmax - theta)
lambdamax <- (1/n-1)/(xmin - theta)
lambda <- uniroot(h, interval = c(lambdamin, lambdamax), n = n)$root
f_n[i] <- -sum(log(1 + lambda*(x - theta)))
}
j <- which.max(f_n)
max_fn <- f_n[j]
mle_theta <- theta_seq[j]
plot(theta_seq, f_n, type = "l",
main = expression(Estimated ~ theta),
xlab = expression(Theta),
ylab = expression(f[n]))
points(mle_theta, f_n[j], pch = 19, col = "red")
segments(
x0 = c(mle_theta, xmin),
y0 = c(min(f_n)*2, max_fn),
x1 = c(mle_theta, mle_theta),
y1 = c(max_fn, max_fn),
col = "red",
lty = "dashed"
)
LR <- function(theta, lambda)
{
2*sum(log(1 + lambda*(x - theta))) - qchisq(0.95, df = 1)
}
lambdamin <- (1/n-1)/(xmax - mle_theta)
lambdamax <- (1/n-1)/(xmin - mle_theta)
lambda <- uniroot(h, interval = c(lambdamin, lambdamax), n = n)$root
uniroot(LR, c(xmin, mle_theta), lambda = lambda)$root
#> [1] 4.774609
Created on 2022-03-25 by the reprex package (v2.0.1)
The one-sided 95% CI is theta >= 4.774609.
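To obtain the closed interval the question asks for, one option (a sketch of my own, not part of the original answer) is to recompute lambda for every candidate theta and then search on both sides of the MLE; the likely reason the upper search returned NA above is that lambda was held fixed at its MLE value. This assumes the profile is well behaved between the endpoints:
LR_profile <- function(theta)
{
  lambdamin <- (1/n - 1)/(xmax - theta)
  lambdamax <- (1/n - 1)/(xmin - theta)
  # local re-definition so that theta is taken from the function argument
  h2 <- function(lam) sum((x - theta)/(1 + lam*(x - theta)))
  lambda <- uniroot(h2, interval = c(lambdamin, lambdamax))$root
  2*sum(log(1 + lambda*(x - theta))) - qchisq(0.95, df = 1)
}
eps <- 0.01   # keep the search away from the data extremes
lower <- uniroot(LR_profile, c(xmin + eps, mle_theta))$root
upper <- uniroot(LR_profile, c(mle_theta, xmax - eps))$root
c(lower, upper)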
I am trying to optimize a function in R using the nloptr package.
Here is the code:
library('nloptr')
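# hn(x, n): evaluates a Hermite-like polynomial of order n via a
# three-term recurrence (hn(x, 0) = 1, hn(x, 1) = 2x); used by term() below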
hn <- function(x, n)
{
hret <- 0
if (n == 0)
{
hret <- 1
return (hret)
}
else if (n == 1)
{
hret <- 2*x
return (hret)
}
else
{
hn2 <- 1
hn1 <- 2*x
all_n <- seq(from = 2, to = n, by = 1)
for (ni in all_n)
{
hn = (2*x*hn1/sqrt(ni)) + (2*sqrt( (ni-1)/ni)*hn2)
#print(hn)
hn2 = hn1
hn1 = hn
}
hret <- hn
return (hret)
}
}
term <- function(alpha, r, theta, n)
{
beta = alpha*cosh(r) - Conj(alpha)*exp(1i*theta)*(sinh(r))
hnterm <- beta/(sqrt(exp(1i*theta)*sinh(2*r)))
term4 <- hn(hnterm, n)
logterm1 <- (1/2)*log(cosh(r))
logterm2 <- -((1/2)*(abs(alpha)^2)) + ((1/2)* (Conj(alpha)^2))*exp(1i*theta)*tanh(r)
logterm3 <- (n/2)*( log (((1/2)*exp(1i*theta)*tanh(r)) ))
logterm4 <- log ( term4)
logA <- logterm1 + logterm2 + logterm3 + logterm4
A <- exp(logA)
return(A)
}
PESQ <- function(x, alpha)
{
p0 <- x[1]
p1 <- x[2]
beta <- x[3]
r <- x[4]
theta <- x[5]
N <- 30
NI <- seq(from = 0, to = N, by = 1)
elements <- rep(0+1i*0, length(NI))
elements_abs_sqr <- rep(0, length(NI))
pr <- rep(0, length(NI))
total <- 0 + 1i*0
for (n in NI)
{
w <-term(2*alpha + beta, r, theta, n)
elements[n+1] <- w
elements_abs_sqr[n+1] <-(abs(w)^2)
}
total <- sum(elements_abs_sqr)
for (n in NI)
{
pr[n+1] <- Re(elements[n+1]/sqrt(total))
pr[n+1] <- pr[n+1]^2
}
p_off_given_on <- pr[1]
elements <- rep(0+1i*0, length(NI))
elements_abs_sqr <- rep(0, length(NI))
pr <- rep(0, length(NI))
total <- 0 + 1i*0
for (n in NI)
{
w <-term(beta, r, theta, n)
elements[n+1] <- w
elements_abs_sqr[n+1] <-(abs(w)^2)
}
total <- sum(elements_abs_sqr)
for (n in NI)
{
pr[n+1] <- Re(elements[n+1]/sqrt(total))
pr[n+1] <- pr[n+1]^2
}
p_on_given_off = 1 - pr[1]
P_e = p0*p_off_given_on + p1*p_on_given_off
return(P_e)
}
eval_g_eq <- function(x)
{
return ( x[1] + x[2] - 1)
}
lb <- c(0, 0, -Inf, 0.001, -pi)
ub <- c(1, 1, Inf, Inf, pi)
local_opts <- list("algorithm" = "NLOPT_LD_MMA",
"xtol_rel"=1.0e-18)
# Set optimization options.
opts <- list("algorithm" = "NLOPT_LN_AUGLAG",
"xtol_rel" = 1.0e-18, "local_opts" = local_opts, "maxeval" = 10000)
x0 <- c(0.1,0.9, 0.1, 0.01, 0.7853982)
alpha <- 0.65
eval_g_ineq <- function(x)
{
return (c (- x[1] - x[2],
x[1] + x[2] - 1)
)
}
eval_f <- function(x)
{
ret = PESQ(x, alpha)
return(ret)
}
res <- nloptr ( x0 = x0,
eval_f = eval_f,
eval_g_eq = eval_g_eq,
eval_g_ineq = eval_g_ineq,
lb = lb,
ub = ub,
opts = opts )
print(res)
Upon running this code, I get the following error:
Error in nloptr(x0 = x0, eval_f = eval_f, eval_g_ineq = eval_g_ineq, eval_g_eq = eval_g_eq, :
STRING_ELT() can only be applied to a 'character vector', not a 'NULL'
Calls: ... withCallingHandlers -> withVisible -> eval -> eval -> nloptr
Execution halted
The weird thing is that if I use "algorithm" = "NLOPT_LN_COBYLA" in opts and remove the equality constraint eval_g_eq from the nloptr call, it runs fine and I get a solution. However, I need equality constraints for my work.
How should I fix the issue?
This is still a bit of a guess, but the only possibility I can come up with is that combining a derivative-based local optimizer with a derivative-free global algorithm is causing the problem. The NLopt docs clarify that the LN in NLOPT_LN_AUGLAG denotes "local, derivative-free", whereas LD would denote "local, derivative-based", yet your local_opts specify the derivative-based NLOPT_LD_MMA. I got an answer (not sure whether it is correct, though!) by using "NLOPT_LN_COBYLA" as the algorithm in local_opts; with everything else as in your code,
local_opts <- list("algorithm" = "NLOPT_LN_COBYLA",
"xtol_rel"=1.0e-18)
# Set optimization options.
opts <- list("algorithm" = "NLOPT_LN_AUGLAG",
"xtol_rel" = 1.0e-18, "local_opts" = local_opts, "maxeval" = 10000)
print(res <- nloptr ( x0 = x0,
eval_f = eval_f,
eval_g_eq = eval_g_eq,
eval_g_ineq = eval_g_ineq,
lb = lb,
ub = ub,
opts = opts ))
Returns
Call:
nloptr(x0 = x0, eval_f = eval_f, lb = lb, ub = ub, eval_g_ineq = eval_g_ineq,
eval_g_eq = eval_g_eq, opts = opts)
Minimization using NLopt version 2.4.2
NLopt solver status: 3 ( NLOPT_FTOL_REACHED: Optimization stopped because
ftol_rel or ftol_abs (above) was reached. )
Number of Iterations....: 102
Termination conditions: xtol_rel: 1e-18 maxeval: 10000
Number of inequality constraints: 2
Number of equality constraints: 1
Optimal value of objective function: 2.13836819774604e-05
Optimal value of controls: 0 1 -0.0003556752 0.006520304 2.037835
As far as I can see this has produced a plausible solution that respects the constraints:
the reason for stopping ("ftol_rel or ftol_abs ... was reached") is sensible
it used a reasonable number (102) of iterations to get there (and not maxeval)
eval_g_eq(res$solution) does give 0 (which we can also see by inspection, as the condition is x[1]+x[2]-1==0).
The inequality constraints are -x1-x2 and x1+x2-1; in nloptr these are interpreted as g(x) <= 0, so together they require 0 <= x1+x2 <= 1. (If x1+x2 is constrained to equal 1 anyway, I'm not sure the inequality constraints can ever do anything here?)
eval_f(x0) is considerably larger than eval_f(res$solution) ...
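These checks can be scripted directly (a small sketch using only objects defined above):
xs <- res$solution
eval_g_eq(xs)              # should be approximately 0
eval_g_ineq(xs)            # both components should be <= 0
all(xs >= lb & xs <= ub)   # box constraints
eval_f(x0) - eval_f(xs)    # positive if the objective improved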
This is my first attempt at fitting a non-linear model in R, so please bear with me.
Problem
I am trying to understand why nls() is giving me this error:
Error in nlsModel(formula, mf, start, wts): singular gradient matrix at initial parameter estimates
Hypotheses
From what I've read in other questions here on SO, it could be because:
my model is discontinuous, or
my model is over-determined, or
I chose bad starting parameter values
So I am asking for help on how to overcome this error. Can I change the model and still use nls(), or do I need to use nls.lm from the minpack.lm package, as I have read elsewhere?
My approach
Here are some details about the model:
the model is a discontinuous function, a kind of staircase type of function (see plot below)
in general, the number of steps in the model can be variable yet they are fixed for a specific fitting event
MWE that shows the problem
Brief explanation of the MWE code
step_fn(x, min = 0, max = 1): function that returns 1 within the interval (min, max] and 0 otherwise; sorry about the name, I realize now it is not really a step function... interval_fn() would be more appropriate I guess.
staircase(x, dx, dy): a summation of step_fn() functions. dx is a vector of widths for the steps, i.e. max - min, and dy is the increment in y for each step.
staircase_formula(n = 1L): generates a formula object representing the model implemented by staircase() (to be used with nls()).
please do note that I use the purrr and glue packages in the example below.
Code
step_fn <- function(x, min = 0, max = 1) {
y <- x
y[x > min & x <= max] <- 1
y[x <= min] <- 0
y[x > max] <- 0
return(y)
}
staircase <- function(x, dx, dy) {
max <- cumsum(dx)
min <- c(0, max[1:(length(dx)-1)])
step <- cumsum(dy)
purrr::reduce(purrr::pmap(list(min, max, step), ~ ..3 * step_fn(x, min = ..1, max = ..2)), `+`)
}
staircase_formula <- function(n = 1L) {
i <- seq_len(n)
dx <- sprintf("dx%d", i)
min <-
c('0', purrr::accumulate(dx[-n], .f = ~ paste(.x, .y, sep = " + ")))
max <- purrr::accumulate(dx, .f = ~ paste(.x, .y, sep = " + "))
lhs <- "y"
rhs <-
paste(glue::glue('dy{i} * step_fn(x, min = {min}, max = {max})'),
collapse = " + ")
sc_form <- as.formula(glue::glue("{lhs} ~ {rhs}"))
return(sc_form)
}
x <- seq(0, 10, by = 0.01)
y <- staircase(x, c(1,2,2,5), c(2,5,2,1)) + rnorm(length(x), mean = 0, sd = 0.2)
plot(x = x, y = y)
lines(x = x, y = staircase(x, dx = c(1,2,2,5), dy = c(2,5,2,1)), col="red")
my_data <- data.frame(x = x, y = y)
my_model <- staircase_formula(4)
params <- list(dx1 = 1, dx2 = 2, dx3 = 2, dx4 = 5,
dy1 = 2, dy2 = 5, dy3 = 2, dy4 = 1)
m <- nls(formula = my_model, start = params, data = my_data)
#> Error in nlsModel(formula, mf, start, wts): singular gradient matrix at initial parameter estimates
Any help is greatly appreciated.
I assume you are given a vector of observations of length len, like the ones plotted in your example, and you wish to identify k jump positions and k jump sizes. (Or maybe I have misunderstood you; you have not really said what you want to achieve.)
Below I will sketch a solution using Local Search. I start with your example data:
x <- seq(0, 10, by = 0.01)
y <- staircase(x,
c(1,2,2,5),
c(2,5,2,1)) + rnorm(length(x), mean = 0, sd = 0.2)
A solution is a list of the positions and sizes of the jumps. Note that I use vectors to store these data, since defining separate variables becomes cumbersome when you have, say, 20 jumps.
An example (random) solution:
k <- 5 ## number of jumps
len <- length(x)
sol <- list(position = sample(len, size = k),
size = runif(k))
## $position
## [1] 89 236 859 885 730
##
## $size
## [1] 0.2377453 0.2108495 0.3404345 0.4626004 0.6944078
We need an objective function to compute the quality of the solution. I also define a simple helper function stairs, which is used by the objective function.
The objective function abs_diff computes the average absolute difference between the fitted series (as defined by the solution) and y.
stairs <- function(len, position, size) {
ans <- numeric(len)
ans[position] <- size
cumsum(ans)
}
abs_diff <- function(sol, y, stairs, ...) {
yy <- stairs(length(y), sol$position, sol$size)
sum(abs(y - yy))/length(y)
}
Now comes the key component for a Local Search: the neighbourhood function that is used to evolve the solution. The neighbourhood function takes a solution and changes it slightly. Here, it will either pick a position or a size and modify it slightly.
neighbour <- function(sol, len, ...) {
p <- sol$position
s <- sol$size
if (runif(1) > 0.5) {
## either move one of the positions ...
i <- sample.int(length(p), size = 1)
p[i] <- p[i] + sample(-25:25, size = 1)
p[i] <- min(max(1, p[i]), len)
} else {
## ... or change a jump size
i <- sample.int(length(s), size = 1)
s[i] <- s[i] + runif(1, min = -s[i], max = 1)
}
list(position = p, size = s)
}
An example call: here the new solution has its first jump size changed.
## > sol
## $position
## [1] 89 236 859 885 730
##
## $size
## [1] 0.2377453 0.2108495 0.3404345 0.4626004 0.6944078
##
## > neighbour(sol, len)
## $position
## [1] 89 236 859 885 730
##
## $size
## [1] 0.2127044 0.2108495 0.3404345 0.4626004 0.6944078
It remains to run the Local Search.
library("NMOF")
sol.ls <- LSopt(abs_diff,
list(x0 = sol, nI = 50000, neighbour = neighbour),
stairs = stairs,
len = len,
y = y)
We can plot the solution: the fitted line is shown in blue.
plot(x, y)
lines(x, stairs(len, sol.ls$xbest$position, sol.ls$xbest$size),
col = "blue", type = "S")
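If I read the NMOF interface correctly, the best solution found and its objective value are stored in the list that LSopt returns:
sol.ls$xbest$position   # estimated jump locations (indices into x)
sol.ls$xbest$size       # estimated jump sizes
sol.ls$OFvalue          # average absolute deviation of the fit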
Try Differential Evolution (DE) instead, for example NMOF's DEopt:
library(NMOF)
yf <- function(params, x) {
dx1 = params[1]; dx2 = params[2]; dx3 = params[3]; dx4 = params[4];
dy1 = params[5]; dy2 = params[6]; dy3 = params[7]; dy4 = params[8]
dy1 * step_fn(x, min = 0, max = dx1) + dy2 * step_fn(x, min = dx1,
max = dx1 + dx2) + dy3 * step_fn(x, min = dx1 + dx2, max = dx1 +
dx2 + dx3) + dy4 * step_fn(x, min = dx1 + dx2 + dx3, max = dx1 +
dx2 + dx3 + dx4)
}
algo1 <- list(printBar = FALSE,
nP = 200L,
nG = 1000L,
F = 0.50,
CR = 0.99,
min = c(0,1,1,4,1,4,1,0),
max = c(2,3,3,6,3,6,3,2))
OF2 <- function(Param, data) {
x <- data$x
y <- data$y
ye <- data$model(Param,x)
aux <- y - ye; aux <- sum(aux^2)
if (is.na(aux)) aux <- 1e10
aux
}
data5 <- list(x = x, y = y, model = yf, ww = 1)
system.time(sol5 <- DEopt(OF = OF2, algo = algo1, data = data5))
sol5$xbest
OF2(sol5$xbest,data5)
plot(x,y)
lines(data5$x,data5$model(sol5$xbest, data5$x),col=7,lwd=2)
#> sol5$xbest
#[1] 1.106396 12.719182 -9.574088 18.017527 3.366852 8.721374 -19.879474 1.090023
#> OF2(sol5$xbest,data5)
#[1] 1000.424
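Note that the reported xbest lies outside the min and max vectors given in algo1. If I remember the NMOF interface correctly (worth double-checking in the package manual), DEopt uses min and max only to sample the initial population unless minmaxConstr = TRUE, so a sketch of a bound-respecting run would be:
algo1$minmaxConstr <- TRUE   # enforce min/max throughout the search
sol5b <- DEopt(OF = OF2, algo = algo1, data = data5)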
Consider the following example of a nonlinear optimization problem. The procedure is too slow to use in simulation studies; in my case, a single replication takes 2.5 hours. How can the processing time be reduced?
library(mvtnorm)
library(alabama)
n = 200
X <- matrix(0, nrow = n, ncol = 2)
X[,1:2] <- rmvnorm(n = n, mean = c(0,0), sigma = matrix(c(1,1,1,4),
ncol = 2))
x0 = matrix(c(X[1,1:2]), nrow = 1)
y0 = x0 - 0.5 * log(n) * (colMeans(X) - x0)
X = rbind(X, y0)
x01 = y0[1]
x02 = y0[2]
x1 = X[,1]
x2 = X[,2]
pInit = matrix(rep(0.1, n + 1), nrow = n + 1)
outopt = list(kkt2.check=FALSE, "trace" = FALSE)
f1 <- function(p) sum(sqrt(pmax(0, p)))/sqrt(n+1)
heq1 <- function(p) c(sum(x1 * p) - x01, sum(x2 * p) - x02, sum(p) - 1)
hin1 <- function(p) p - 1e-06
sol <- alabama::auglag(pInit, fn = function(p) -f1(p),
heq = heq1, hin = hin1,
control.outer = outopt)
-1 * sol$value
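One change that often helps substantially (a sketch based on the assumption that the bottleneck is the finite-difference derivatives auglag computes by default): supply the analytic gradient and constraint Jacobians, which are simple for this problem.
gr1 <- function(p) -0.5/(sqrt(pmax(p, 1e-12)) * sqrt(n + 1))  # gradient of -f1
heq1.jac <- function(p) rbind(x1, x2, rep(1, length(p)))      # one row per equality
hin1.jac <- function(p) diag(length(p))                       # p - 1e-06 has identity Jacobian
sol <- alabama::auglag(pInit, fn = function(p) -f1(p), gr = gr1,
                       heq = heq1, heq.jac = heq1.jac,
                       hin = hin1, hin.jac = hin1.jac,
                       control.outer = outopt)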
I am trying to obtain a steady state for a spatially explicit Lotka-Volterra competition model of two competing species (with spatial diffusion). Here is the model (without the diffusion term):
http://en.wikipedia.org/wiki/Competitive_Lotka%E2%80%93Volterra_equations
where I let r1 = r2 = rG and alpha12 = alpha21 = a. The carrying capacity of species 1 is assumed to vary linearly across space, i.e. K1 = x (while K2 = 0.5), and we assume Neumann boundary conditions. The spatial domain runs from x = 0 to x = 1.
Here is the example of coding in R for this model:
library(rootSolve)   # provides steady.1D
LVcomp1D <- function (time, state, parms, N, Da, x, dx) {
with (as.list(parms), {
S1 <- state[1:N]
S2 <- state[(N+1):(2*N)]
## Dispersive fluxes; zero-gradient boundaries
FluxS1 <- -Da * diff(c(S1[1], S1, S1[N]))/dx
FluxS2 <- -Da * diff(c(S2[1], S2, S2[N]))/dx
## LV Competition
InteractS1 <- rG * S1 * (1- (S1/x)- ((a*S2)/x))
InteractS2 <- rG * S2 * (1- (S2/(K2))- ((a*S1)/(K2)))
## Rate of change = -Flux gradient + Interaction
dS1 <- -diff(FluxS1)/dx + InteractS1
dS2 <- -diff(FluxS2)/dx + InteractS2
return (list(c(dS1, dS2)))
})
}
pars <- c(rG = 1.0, a = 0.8, K2 = 0.5)
dx <- 0.001
x <- seq(0, 1, by = dx)
N <- length(x)
Da <- 0.001
state <- c(rep(0.5, N), rep(0.5, N))
print(system.time(
out <- steady.1D (y = state, func = LVcomp1D, parms = pars,
nspec = 2, N = N, x = x, dx = dx, Da = Da, pos = TRUE)
))
mf <- par(mfrow = c(2, 2))
plot(out, grid = x, xlab = "x", mfrow = NULL,
ylab = "N(x)", main = c("Species 1", "Species 2"), type = "l")
par(mfrow = mf)
The problem is that I cannot get the steady-state solutions of the model; I keep getting a horizontal line lying on the x-axis. Can you please help me? I do not know what is wrong with this code.
Thank you
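One thing worth checking (an assumption on my part, not a confirmed diagnosis): K1 = x is exactly zero at the left end of the grid, so the term S1/x divides by zero at x = 0, and the resulting NaN can make the steady-state solver collapse to the trivial zero solution. Shifting the grid away from zero avoids the division:
x <- seq(dx, 1, by = dx)   # start the grid at dx instead of 0
N <- length(x)
state <- c(rep(0.5, N), rep(0.5, N))
# then call steady.1D() exactly as above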