I have written this code:
library(VGAM) # provides dbetabinom.ab
P<-4000000 #population
j<-4 #exposures
budget<-7000 #Euros
vehicles<-data.frame(A1=c(2000001,1700000,1619200),A2=c(2500000,1900000,1781120),Price=c(2000,1500,1000)) #A1: Audience1, A2: Audience2 & Price-insertion
end.i<-FALSE
for(i in seq(4,1000,1)){
for(k in 1:nrow(vehicles)){
R1=vehicles$A1[k]/P;R2=vehicles$A2[k]/P
shape1=((R1)*((R2)-(R1)))/(2*(R1)-(R1)*(R1)-(R2))
shape2=(shape1*(1-(R1)))/(R1)
t <- dbetabinom.ab(1:i, size = i, shape1 = shape1, shape2 = shape2)
print(t[j])
print(paste(k,"vehicles",sep=" "))
print(paste(i,"insertions", sep=" "))
price<-i*vehicles$Price[k]
print(paste(price,"Euros",sep=" "))
if((i*vehicles$Price[k])<=budget& t[j]>=0.024 & t[j]<=0.025){end.i<-TRUE;break;}
};
if (end.i) break;
}
This code extracts the number of insertions (i) necessary to reach 'X individuals (t[j] probability x population) exposed j times' (my objective).
However, the code ends as soon as it reaches a solution. I would be interested in knowing how to program it to enumerate all possible solutions and then choose the one that also minimizes the cost of the insertions (vehicles$Price[k] x i).
Kind regards,
Majesus
Try this. Just append the solutions to a data frame (called out_put in this case)
library(VGAM) # provides dbetabinom.ab
P<-4000000 #population
j<-4 #exposures
budget<-7000 #Euros
vehicles<-data.frame(A1=c(2000001,1700000,1619200),A2=c(2500000,1900000,1781120),Price=c(2000,1500,1000)) #A1: Audience1, A2: Audience2 & Price-insertion
out_put = data.frame(TJ = NA,Vehicles = NA, Insertions = NA,Price_Euros = NA)
for(i in seq(4,1000,1)){
for(k in 1:nrow(vehicles)){
R1=vehicles$A1[k]/P;R2=vehicles$A2[k]/P
shape1=((R1)*((R2)-(R1)))/(2*(R1)-(R1)*(R1)-(R2))
shape2=(shape1*(1-(R1)))/(R1)
t <- dbetabinom.ab(1:i, size = i, shape1 = shape1, shape2 = shape2)
price<-i*vehicles$Price[k]
out_put = rbind(out_put,c(t[j],k,i,price))
}
}
out_put = out_put[2:nrow(out_put),]
rownames(out_put) = NULL
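To get the cost-minimizing solution asked for in the question, one option (a sketch that reuses the column names defined above and the constraints from the original loop) is to filter out_put for feasible rows and keep the cheapest one:
feasible <- out_put[out_put$Price_Euros <= budget &
                    out_put$TJ >= 0.024 & out_put$TJ <= 0.025, ]
best <- feasible[which.min(feasible$Price_Euros), ]  # cheapest feasible combination
best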
Not sure what I'm doing wrong here. I'm trying to get a cross-validation score for a mixture-of-two-gammas model.
llikGammaMix2 = function(param, x) {
if (any(param < 0) || param["p1"] > 1) {
return(-Inf)
} else {
return(sum(log(
dgamma(x, shape = param["k1"], scale = param["theta1"]) *
param["p1"] + dgamma(x, shape = param["k2"], scale = param["theta2"]) *
1
(1 - param["p1"])
)))
}
}
initialParams = list(
theta1 = 1,
k1 = 1.1,
p1 = 0.5,
theta2 = 10,
k2 = 2
)
for (i in 1:nrow(cichlids)) {
SWS1_training <- cichlids$SWS1 - cichlids$SWS1[i]
SWS1_test <- cichlids$SWS1[i]
MLE_training2 <-
optim(
par = initialParams,
fn = llikGammaMix2,
x = SWS1_training,
control = list(fnscale = -1)
)$par
LL_test2 <-
optim(
par = MLE_training2,
fn = llikGammaMix2,
x = SWS1_test,
control = list(fnscale = -1)
)$value
}
print(LL_test2)
This runs until it gets to the first optim(), then spits out Error in fn(par, ...) : attempt to apply non-function.
My first thought was a silly spelling error somewhere, but that doesn't seem to be the case. Any help is appreciated.
I believe the issue is in the return statement. It's unclear whether you meant to multiply or add the last quantity, (1 - param["p1"]), in the return value. Since this is a mixture, I'm guessing you mean for it to be multiplied. As written it just dangles on a line of its own, which is what breaks the function:
return(sum(log(dgamma(x, shape = param["k1"], scale = param["theta1"]) *
param["p1"] +
dgamma(x, shape = param["k2"], scale = param["theta2"]) *
(1 - param["p1"])))) ## ISSUE HERE: Is this what you meant?
There could be other issues with the code. I would double-check that the function you are optimizing is what you think it ought to be. It's also hard to tell without a reproducible example we could run. Try to clear up the above issue and let us know if there are still problems.
Here's my code:
k = str(input())
s = str(input())
x = 0
if len(k) <= len(s):
    while x < len(k):
        if k[x] == s[x]:
            k = k.replace(k[x], "")
            s = s.replace(s[x], "")
            x = x + 1
        else:
            x = x + 1
It says that the else is not working and I don't know how to debug it. Thank you.
I think this method is much more elegant than the original one. If the strings are very long and performance is important, keep in mind that strings in Python are immutable; you can optimize the code by using one of the methods suggested in the following article.
k = "String1Avc"
s = "String2Abc"
new_k = ""
new_s = ""
for i in range(min(len(k), len(s))):
if k[i] != s[i]:
new_k += k[i]
new_s += s[i]
print(new_k)
print(new_s)
output:
1v
2b
I have a general function; I have provided an example below of simple linear regression:
x = 1:30
y = 0.7 * x + 32
Data = rnorm(30, mean = y, sd = 2.5);
lin = function(pars = c(grad,cons)) {
expec = pars[1] * x + pars[2];
SSE = sum((Data - expec)^2)
return(SSE)
}
start_vals = c(0.2,10)
lin(start_vals)
estimates = optim(par = start_vals, fn = lin);
## plot the data
Fit = estimates$par[1] * x + estimates$par[2]
plot(x,Data)
lines(x, Fit, col = "red")
So that's straightforward. What I want is to store the expectation for the last set of parameters, so that once I have finished optimizing I can view it. I have tried using a global container and populating it when the function is executed, but it doesn't work, e.g.
Expectation = c();
lin = function(pars = c(grad,cons)) {
expec = pars[1] * x + pars[2];
Expectation = expec;
SSE = sum((Data - expec)^2)
return(SSE)
}
start_vals = c(0.2,10)
estimates = optim(par = start_vals, fn = lin);
Expectation ## print the expectation that would relate to estimates$par
I know that this is trivial to do outside of the function, but my actual problem (which is analogous to this) is much more complex. Basically I need to return internal information that can't be retrospectively calculated. Any help is much appreciated.
You should use <<- instead of = in your lin function: Expectation <<- expec. The operators <<- and ->> are normally only used in functions, and cause a search to be made through parent environments for an existing definition of the variable being assigned.
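Applied to the snippet above (a minimal sketch reusing x and Data from the first example), that looks like:
Expectation <- c()
lin <- function(pars = c(grad, cons)) {
  expec <- pars[1] * x + pars[2]
  Expectation <<- expec   # assign into the enclosing (here global) environment
  SSE <- sum((Data - expec)^2)
  return(SSE)
}
start_vals <- c(0.2, 10)
estimates <- optim(par = start_vals, fn = lin)
head(Expectation)  # expectation from the last time optim evaluated lin
Note that the last evaluation optim makes is not guaranteed to be exactly at estimates$par; if you need the expectation at the optimum itself, call lin(estimates$par) once after optim returns and Expectation will be overwritten accordingly.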
I am using the package fda, in particular the function fRegress. This function includes another function called eigchk that checks whether the coefficient matrix is singular.
Here is the function as the package owners (J. O. Ramsay, Giles Hooker, and Spencer Graves) wrote it.
eigchk <- function(Cmat) {
# check Cmat for singularity
eigval <- eigen(Cmat)$values
ncoef <- length(eigval)
if (eigval[ncoef] < 0) {
neig <- min(length(eigval),10)
cat("\nSmallest eigenvalues:\n")
print(eigval[(ncoef-neig+1):ncoef])
cat("\nLargest eigenvalues:\n")
print(eigval[1:neig])
stop("Negative eigenvalue of coefficient matrix.")
}
if (eigval[ncoef] == 0) stop("Zero eigenvalue of coefficient matrix.")
logcondition <- log10(eigval[1]) - log10(eigval[ncoef])
if (logcondition > 12) {
warning("Near singularity in coefficient matrix.")
cat(paste("\nLog10 Eigenvalues range from\n",
log10(eigval[ncoef])," to ",log10(eigval[1]),"\n"))
}
}
As you can see, the last if condition checks whether logcondition is bigger than 12 and then prints the range of the eigenvalues.
The following code implements the use of regularization with a roughness penalty. The code is taken from the book "Functional Data Analysis with R and MATLAB".
annualprec = log10(apply(daily$precav,2,sum))
tempbasis =create.fourier.basis(c(0,365),65)
tempSmooth=smooth.basis(day.5,daily$tempav,tempbasis)
tempfd =tempSmooth$fd
templist = vector("list",2)
templist[[1]] = rep(1,35)
templist[[2]] = tempfd
conbasis = create.constant.basis(c(0,365))
betalist = vector("list",2)
betalist[[1]] = conbasis
SSE = sum((annualprec - mean(annualprec))^2)
Lcoef = c(0,(2*pi/365)^2,0)
harmaccelLfd = vec2Lfd(Lcoef, c(0,365))
betabasis = create.fourier.basis(c(0, 365), 35)
lambda = 10^12.5
betafdPar = fdPar(betabasis, harmaccelLfd, lambda)
betalist[[2]] = betafdPar
annPrecTemp = fRegress(annualprec, templist, betalist)
betaestlist2 = annPrecTemp$betaestlist
annualprechat2 = annPrecTemp$yhatfdobj
SSE1.2 = sum((annualprec-annualprechat2)^2)
RSQ2 = (SSE - SSE1.2)/SSE
Fratio2 = ((SSE-SSE1.2)/3.7)/(SSE1.2/30.3)
resid = annualprec - annualprechat2
SigmaE. = sum(resid^2)/(35-annPrecTemp$df)
SigmaE = SigmaE.*diag(rep(1,35))
y2cMap = tempSmooth$y2cMap
stderrList = fRegress.stderr(annPrecTemp, y2cMap, SigmaE)
betafdPar = betaestlist2[[2]]
betafd = betafdPar$fd
betastderrList = stderrList$betastderrlist
betastderrfd = betastderrList[[2]]
As penalty factor the authors use a certain lambda.
The following code implements the search for an appropriate lambda:
loglam = seq(5,15,0.5)
nlam = length(loglam)
SSE.CV = matrix(0,nlam,1)
for (ilam in 1:nlam) {
lambda = 10^loglam[ilam]
betalisti = betalist
betafdPar2 = betalisti[[2]]
betafdPar2$lambda = lambda
betalisti[[2]] = betafdPar2
fRegi = fRegress.CV(annualprec, templist,
betalisti)
SSE.CV[ilam] = fRegi$SSE.CV
}
By changing the values of loglam and cross-validating I am supposed to obtain the best lambda, yet if loglam is too long or its values lead the coefficient matrix to singularity, I receive the following message:
Log10 Eigenvalues range from
-5.44495317739048 to 6.78194912518214
It is created by the function eigchk, as I have already mentioned above.
Now my question is: is there any way to catch this so-called warning? By catch I mean some function or method that alerts me when this has happened, so that I can adjust the values of loglam. Since there is no actual warning defined in the function besides this printed message, I have run out of ideas.
Thank you all a lot for your suggestions.
By "catch the warning", if you mean, will alert you that there is a potential problem with loglam, then you might want to look at try and tryCatch functions. Then you can define the behavior you want implemented if any warning condition is satisfied.
If you just want to store the output of the warning (which might be assumed from the question title, but may not be what you want), then try looking into capture.output.
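As a rough sketch of the tryCatch approach applied to the loop in the question (not tested against the fda objects), the warning raised by eigchk can be intercepted so the offending loglam value is recorded and skipped:
for (ilam in 1:nlam) {
  lambda <- 10^loglam[ilam]
  betalisti <- betalist
  betafdPar2 <- betalisti[[2]]
  betafdPar2$lambda <- lambda
  betalisti[[2]] <- betafdPar2
  fRegi <- tryCatch(
    fRegress.CV(annualprec, templist, betalisti),
    warning = function(w) {
      message("loglam = ", loglam[ilam], " skipped: ", conditionMessage(w))
      NULL   # signal to the caller that this fit was abandoned
    }
  )
  SSE.CV[ilam] <- if (is.null(fRegi)) NA else fRegi$SSE.CV
}
Note that a tryCatch warning handler aborts the computation for that lambda; if you only want to collect the eigenvalue ranges that eigchk prints with cat(), wrapping the same call in capture.output() instead would record that text without interrupting the fit.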
I want to create a dynamic model of butterfly ecology using deSolve. The simulation runs over several simulation years and some events are triggered by the day of the year (so I added one state variable, days). In order to trigger those events I want to use an ifelse statement, and it works fine until I put an operation involving another state variable inside the ifelse: D.egg.sus = ifelse(days < 270, 400 * adult.sus, 0).
When I do so, the simulation runs, but it seems to ignore the ifelse statement.
Can anyone help me please? Here is my full code:
days = 1
egg.sus = 0
larvae.sus = 0
pupae.sus = 0
adult.sus = 1000
state = c(days = days, egg.sus=egg.sus, larvae.sus=larvae.sus,
pupae.sus=pupae.sus, adult.sus=adult.sus)
model = function(t, state, parameters)
{
with(as.list(c(state, parameters)),
{
D.Days = 1
D.egg.sus =
( ifelse(days<270, (400*adult.sus) ,0)) ## This is the line causing trouble
(- egg.sus/5)
(- egg.sus * rbeta(1, 6.038892/5,1.4612593)*.95)
D.larvae.sus =
(+ egg.sus/5)
(- larvae.sus * rbeta(1, 0.248531/14,0.2094379)*0.95)
(- larvae.sus/14)
D.pupae.sus =
(+ larvae.sus/14)
(- pupae.sus * rbeta(1, 0.022011/15, 1.43503))
(- pupae.sus/15)
D.adult.sus =
(+ pupae.sus/15)
(- adult.sus/30)
list(c( D.Days, D.egg.sus, D.larvae.sus,D.pupae.sus, D.adult.sus))
}
)}
events <- data.frame(var = c('days'),
time = seq(364,73000,by=365) ,
value = 0,
method = "rep")
require(deSolve)
times = seq(1,900, by = 1)
out = ode(y=state, times = times, func = model, parms = parameters, events = list(data=events))
dev.cur()
plot(out, col = 2)
I don't know about five years ago, but at the time of writing ifelse works just fine with deSolve. Your issue seems to be that the value returned by your ifelse is not what you expected. Instead you might want to use a flag variable, or save the return value of your ifelse to a variable that you can then use in your model.
Here is a small example demonstrating how you can use a flag in your model parameters
library(deSolve)
# Our model function, first-order
# One parameter is a flag that is used by the ifelse to set Ka to zero if TRUE.
onecomp <- function(t, state, parameters) {
with(as.list(c(state, parameters)), {
Ka = ifelse(flag == TRUE, 0, Ka) # Set Ka to zero when the flag is TRUE
dX <- - X*Ka
dY <- X*Ka - Y*Ke
list(c(dX, dY))
})
}
times <- seq(0, 24, by = 0.01)
parameters <- c(Ka = 0.8 , Ke = 0.2, flag = FALSE)
state <- c(X = 100 , Y = 0)
# Test for TRUE
out <- ode(y = state, times = times, func = onecomp, parms = parameters)
plot(out)
# Test for FALSE, where we expect no transfer.
parameters <- c(Ka = 0.8 , Ke = 0.2, flag = TRUE)
out <- ode(y = state, times = times, func = onecomp, parms = parameters)
plot(out)
Created on 2021-01-13 by the reprex package (v0.3.0)
The model in the question has several issues:
You can use the simulation time directly instead of a state variable days, because simulation time in the function is given as t. Then just use the modulo operator %% and you don't need events anymore.
The parameters are all hard-coded, so use parms = NULL in the ode() function.
The line breaks are wrong. R continues lines if (and only if) they are not yet syntactically complete. Therefore, remove the obsolete parentheses and, for example, put the - operator at the end of the line (see the sketch after this list).
Use of a random number, e.g. rbeta, within an ODE function is a very bad idea, especially for solvers with automatic time steps. ODEs are deterministic by definition. One may consider a fixed time-step solver instead, e.g. method = "euler" with a very small time step, or (much better) provide the random values as an external input (forcing).
If you use an external input, you can avoid the ifelse anyway.
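Putting the first three points together, a corrected model might look roughly like this (a sketch only: the r.egg, r.larvae and r.pupae rates are made-up placeholder constants standing in for the rbeta() terms, which the last point suggests moving out of the ODE anyway):
library(deSolve)
r.egg    <- 0.8    # placeholder for the rbeta() egg mortality term
r.larvae <- 0.02   # placeholder for the rbeta() larval mortality term
r.pupae  <- 0.001  # placeholder for the rbeta() pupal mortality term
butterfly <- function(t, state, parms) {
  with(as.list(state), {
    day <- t %% 365   # day of year from simulation time, no 'days' state or events needed
    D.egg.sus <- ifelse(day < 270, 400 * adult.sus, 0) -
      egg.sus / 5 -
      egg.sus * r.egg
    D.larvae.sus <- egg.sus / 5 -
      larvae.sus * r.larvae -
      larvae.sus / 14
    D.pupae.sus <- larvae.sus / 14 -
      pupae.sus * r.pupae -
      pupae.sus / 15
    D.adult.sus <- pupae.sus / 15 -
      adult.sus / 30
    list(c(D.egg.sus, D.larvae.sus, D.pupae.sus, D.adult.sus))
  })
}
state <- c(egg.sus = 0, larvae.sus = 0, pupae.sus = 0, adult.sus = 1000)
out <- ode(y = state, times = seq(1, 900, by = 1), func = butterfly, parms = NULL)
plot(out, col = 2)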