Unused arguments error in RStudio

I get an error when I try to run this line of code:
nnetPred.model <- nnetPred(X, Y, step_size = 0.4, reg = 0.0002, h = 50, niteration = 6000)
The error message is:
Error in nnetPred(X, Y, step_size = 0.4, reg = 2e-04, h = 50, niteration = 6000) :
unused arguments (step_size = 0.4, reg = 2e-04, h = 50, niteration = 6000)
My code is as below:
nnetPred <- function(X, Y, para = list()){
  W  <- para[[1]]   # first-layer weights
  b  <- para[[2]]   # first-layer bias
  W2 <- para[[3]]   # second-layer weights
  b2 <- para[[4]]   # second-layer bias
  N  <- nrow(X)
  # ReLU hidden layer
  hidden_layer <- pmax(0, X %*% W + matrix(rep(b, N), nrow = N, byrow = TRUE))
  hidden_layer <- matrix(hidden_layer, nrow = N)
  # class scores and predicted class for each row
  scores <- hidden_layer %*% W2 + matrix(rep(b2, N), nrow = N, byrow = TRUE)
  predicted_class <- apply(scores, 1, which.max)
  return(predicted_class)
}
nnetPred.model <- nnetPred(X, Y, step_size = 0.4, reg = 0.0002, h = 50, niteration = 6000)

It looks like you are trying to use variable arguments. In R, that means the ellipsis (...). This is how you would define the top of nnetPred to use variable arguments:
nnetPred <- function(X, Y, ...) {
  para <- list(...)
This will work in your case, but it is not really the best way to define the function, because you have a fixed number of parameters. Variable argument lists are meant for the case where the number of parameters is unknown. I would recommend simply putting your parameters in the parameter list; you can rename them inside the function if you want to:
nnetPred <- function(X, Y, step_size, reg, h, niteration) {
  W  <- step_size
  b  <- reg
  W2 <- h
  b2 <- niteration
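As a minimal, self-contained illustration of the difference (toy functions, not the asker's model): a call is accepted only when every named argument either matches a formal parameter or is swallowed by the ellipsis.
# Toy illustration only: f_dots collects unknown extra arguments into ...,
# f_named declares the same argument names the failing call uses.
f_dots  <- function(X, Y, ...) length(list(...))
f_named <- function(X, Y, step_size, reg) step_size + reg
f_dots(1, 2, step_size = 0.4, reg = 2e-4)   # 2: extra arguments collected into ...
f_named(1, 2, step_size = 0.4, reg = 2e-4)  # 0.4002: arguments matched by name
Note, too, that the body of nnetPred uses W, b, W2 and b2 as trained weights and biases (for example in X %*% W), so renaming the training hyper-parameters step_size, reg, h and niteration to those names will silence the error but will not produce a meaningful prediction; the values passed in still have to be the quantities the body expects.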


Related

Different behaviour of R program in Ubuntu AWS and Ubuntu laptop

library(statcomp)
library(igraph)
library(NetworkDistance)
library(png)
# first image: read, flatten to a 1024-point series, build the visibility graph
a <- readPNG("001.png")
c <- list(a, col = 1, byrow = TRUE)
d <- unlist(c)
d <- d[1:1024]
e <- ts(d)
hr1 <- HVG(e, meth = "HVG", maxL = 10^9, rho = NA)
hr1 <- hr1$A   # keep the graph's adjacency matrix (the A component)
# second image: same steps
l <- readPNG("002.png")
n <- list(l, col = 1, byrow = TRUE)
p <- unlist(n)
p <- p[1:1024]
q <- ts(p)
hr2 <- HVG(q, meth = "HVG", maxL = 10^9, rho = NA)
hr3 <- hr2$A
# distance between the two adjacency matrices
x <- list(hr1, hr3)
y <- nd.him(x, out.dist = TRUE)
This program executes properly on my laptop, but on the Amazon server it gives the following error:
Error in if ((as.double(RSpectra::eigs(matD, 1, which = "SM")$values)) < :
argument is of length zero
In addition: Warning message:
In do.call(.Call, args = dot_call_args) :
only 0 eigenvalue(s) converged, less than k = 1
Do you have any idea how I can solve this problem?

Problem with multi-objective optimization constraints: R

I have the following code that defines two constraints I want to use in my multi-objective optimization problem; model1, model2 and model3 have already been fitted and verified to work.
restrictions <- function(var) {
  x <- var[1]; y <- var[2]
  restrictions <- logical(2)
  restrictions[1] <- (predict(get(model1), data.frame(x, y), type = "response") < 500)
  restrictions[2] <- (predict(get(model1), data.frame(x, y), type = "response") > 0)
  return(restrictions)
}
The multi-objective function for the genetic algorithm is built in the following code:
fn <- function(var) {
  x <- var[1]; y <- var[2]
  f <- numeric(3)
  f[1] <- predict(get(model1), data.frame(x, y), type = "response")
  f[2] <- predict(get(model2), data.frame(x, y), type = "response")
  f[3] <- predict(get(model3), data.frame(x, y), type = "response")
  return(f)
}
And finally the optimization is run using the mco library:
library(mco)
optimum <- mco::nsga2(fn = fn, idim = 2, odim = 3,
                      constraints = restrictions, cdim = 2,
                      generations = 100,
                      popsize = 40,
                      cprob = 0.5,
                      cdist = 20,
                      mprob = 0.5,
                      mdist = 20,
                      lower.bounds = c(-80, 50),
                      upper.bounds = c(-70, 60))
The main problem is that the solution does not abide by the constraints specified. Any thoughts on that?
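One thing worth checking, and this is an assumption on my part based on how I remember mco::nsga2 documenting its constraint handling (see ?nsga2): the constraint function is expected to return numeric values, and a point is treated as feasible when every value is non-negative. Returning a logical vector coerces TRUE/FALSE to 1/0, and 0 still passes a >= 0 test, so violated constraints can be accepted silently. A sketch of the rewrite under that assumption:
# Hypothetical rewrite: return numeric "slack" values that are >= 0 exactly
# when the constraint holds, instead of TRUE/FALSE.
restrictions <- function(var) {
  x <- var[1]; y <- var[2]
  pred <- as.numeric(predict(get(model1), data.frame(x, y), type = "response"))
  c(500 - pred,  # non-negative when the prediction is below 500
    pred)        # non-negative when the prediction is at least 0
}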

nlsBoot error: dim(X) must have a positive length

I am getting the error below with nlsBoot(); any idea what is wrong?
Error in apply(tabboot, 1, quantile, c(0.5, 0.025, 0.975)) :
dim(X) must have a positive length
set.seed(1)
x = 1:100
y = x^2 + rnorm(100, 50, 500)
plot(x, y)
d = data.frame(x = x, y = y)
mymodel = nls(y ~ x^b, start = list(b = 1), data = d)
mymodel
library(nlstools)
nlsBoot(mymodel, niter = 999)
Try to define the formula before applying the nls function, like this:
formula <- as.formula(y ~ x^b)
mymodel <- nls(formula, start = list(b = 1), data = d)
Edit: I've modified the code so that it can now handle a one-parameter fit.
# My suggestion is to erase the whole environment first:
rm(list = ls())
# Then we start again:
set.seed(1)
x = 1:100
y = x^2 + rnorm(100, 50, 500)
plot(x, y)
d = data.frame(x = x, y = y)
mymodel = nls(y ~ x^b, start = list(b = 1), data = d)
Here is the function that you have to use:
nlsboot_onepar <- function(nls, niter = 999) {
  if (!inherits(nls, "nls"))
    stop("Use only with 'nls' objects")
  data2 <- eval(nls$data, sys.frame(0))
  fitted1 <- fitted(nls)
  resid1 <- resid(nls)
  var1 <- all.vars(formula(nls)[[2]])
  # resample centred residuals and refit the model niter times
  l1 <- lapply(1:niter, function(i) {
    data2[, var1] <- fitted1 + sample(scale(resid1, scale = FALSE),
                                      replace = TRUE)
    nls2 <- try(update(nls, start = as.list(coef(nls)), data = data2),
                silent = TRUE)
    if (inherits(nls2, "nls"))
      return(list(coef = coef(nls2), rse = summary(nls2)$sigma))
  })
  if (sum(sapply(l1, is.null)) > niter / 2)
    stop(paste("Procedure aborted: the fit only converged in",
               round(100 * sum(!sapply(l1, is.null)) / niter),
               "% during bootstrapping"))
  # keep the coefficients as a 1 x niter matrix so that apply() still works
  # even though there is only a single parameter
  tabboot <- sapply(l1[!sapply(l1, is.null)], function(z) z$coef,
                    simplify = FALSE)
  tabboot <- as.matrix(t(as.numeric(tabboot)))
  rownames(tabboot) <- "b"   # hard-coded for the single parameter b in this example
  rseboot <- sapply(l1[!sapply(l1, is.null)], function(z) z$rse)
  recapboot <- t(apply(tabboot, 1, quantile, c(0.5, 0.025, 0.975)))
  colnames(recapboot) <- c("Median", "2.5%", "97.5%")
  estiboot <- t(apply(tabboot, 1, function(z) c(mean(z), sd(z))))
  colnames(estiboot) <- c("Estimate", "Std. error")
  serr <- sum(sapply(l1, is.null))
  if (serr > 0)
    warning(paste("The fit did not converge", serr,
                  "times during bootstrapping"))
  listboot <- list(coefboot = t(tabboot), rse = rseboot, bootCI = recapboot,
                   estiboot = estiboot)
  class(listboot) <- "nlsBoot"
  return(listboot)
}
And then we use it:
result <- nlsboot_onepar(mymodel, niter = 999)
If you want to plot the parameter distribution, you can do this:
graphics.off()
plot(density(as.vector(result$coefboot)))
# or
hist(as.vector(result$coefboot))
I hope that helps you.
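For what it's worth, the reason the stock nlsBoot() fails here appears to be that, with a single parameter, the bootstrap coefficients are simplified to a plain vector with no dim attribute, which is exactly what apply() complains about. A minimal sketch of that mechanism (toy numbers, not the model above):
# With one parameter, sapply() collapses the per-replicate coefficients to a
# named vector rather than a matrix, so dim() is NULL and apply() fails.
coefs <- sapply(1:5, function(i) c(b = 1 + i / 10))
dim(coefs)                                   # NULL
# apply(coefs, 1, quantile)                  # would raise "dim(X) must have a positive length"
coefs_mat <- t(as.matrix(coefs))             # force a 1 x 5 matrix instead
apply(coefs_mat, 1, quantile, c(0.5, 0.025, 0.975))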

Having a function output two quantities in looping structure in R?

In my R function below, how can I change my code so that I can also get pe out of my fun function? Right now, fun only outputs L and U.
P.S. I want the function to keep working as it does right now, so replicate may also need to change as a result of fun outputting pe in addition to L and U.
CI.bi = function(n, p, n.sim){
  fun <- function(n1 = n, p1 = p){
    x <- rbinom(1, size = n1, prob = p1)
    pe <- x/n1
    res <- binom.test(x, n1, p1)[[4]]
    c(L = res[1], U = res[2])
  }
  sim <- t(replicate(n.sim, fun()))
  y = unlist(lapply(1:n.sim, function(x) c(x, x)))
  plot(sim, y, ty = "n", ylab = NA, yaxt = "n")
  segments(sim[, 1], 1:n.sim, sim[, 2], 1:n.sim, lend = 1)
}
# Example of use:
CI.bi(n = 15, p = .5, n.sim = 3)
You can have fun() return pe as an additional element of its return vector.
When referencing sim later on, just specify which columns you want to use. I believe the code below replicates your current functionality but has pe as an additional output of fun():
CI.bi = function(n, p, n.sim){
  fun <- function(n1 = n, p1 = p){
    x <- rbinom(1, size = n1, prob = p1)
    pe <- x/n1
    res <- binom.test(x, n1, p1)[[4]]
    c(L = res[1], U = res[2], pe = pe)
  }
  sim <- t(replicate(n.sim, fun()))
  y = unlist(lapply(1:n.sim, function(x) c(x, x)))
  plot(sim[, 1:2], y, ty = "n", ylab = NA, yaxt = "n")
  segments(sim[, 1], 1:n.sim, sim[, 2], 1:n.sim, lend = 1)
}
CI.bi(n = 15, p = .5, n.sim = 3)
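A small follow-up on my part (not something the answer above claims): CI.bi itself still discards sim after plotting, so if pe should also be visible to the caller, the function can return sim invisibly. A sketch:
CI.bi = function(n, p, n.sim){
  fun <- function(n1 = n, p1 = p){
    x <- rbinom(1, size = n1, prob = p1)
    pe <- x/n1
    res <- binom.test(x, n1, p1)[[4]]
    c(L = res[1], U = res[2], pe = pe)
  }
  sim <- t(replicate(n.sim, fun()))
  y <- rep(1:n.sim, each = 2)            # same 1,1,2,2,... pattern as before
  plot(sim[, 1:2], y, ty = "n", ylab = NA, yaxt = "n")
  segments(sim[, 1], 1:n.sim, sim[, 2], 1:n.sim, lend = 1)
  invisible(sim)                         # columns L, U and pe
}
out <- CI.bi(n = 15, p = .5, n.sim = 3)
out[, "pe"]                              # the point estimates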

Unsupported objective function return value

Here's my code: a function to be optimized with the DEoptim algorithm; the function is quite simple, really.
Reproducible code:
library(DEoptim)
library(sm)
tau.0 <- c(58.54620, 61.60164, 64.65708, 71.19507, 82.39836, 101.28953, 119.68789)
rate <- c(0.04594674, 0.01679026, 0.02706263, 0.04182605, 0.03753949, 0.04740187, 0.05235710)
Du <- c(4.27157210, -0.07481174, -0.10551229, 0.51753843, 1.51075420, 6.51483315, 7.35631500)
Co <- c(0.2364985, -6.2947479, -7.5422644, -1.2745887, -42.6203118, 55.7663196, 70.9541141)
h <- h.select(x = tau.0, y = rate, method = 'cv')
sm <- sm.regression(x = tau.0, y = rate, h = h)
ya <- sm$estimate
xa <- sm$eval.points
y <- approx(x = xa, y = ya, xout = tau.0, rule = 2)$y
besty <- function(x) {
  dtau.0 <- x
  xout <- seq(1, max(tau.0), dtau.0)
  ratem <- approx(x = tau.0, y = rate / 1, xout = xout)$y
  ym <- approx(x = tau.0, y = y / 1, xout = xout)$y
  Dum <- approx(x = tau.0, y = Du, xout = xout)$y
  Com <- approx(x = tau.0, y = Co, xout = xout)$y
  dy <- NULL
  for (i in 1:length(ym)) {
    dy[i] <- ratem[i] - ym[i - 1]
  }
  dy[is.na(dy)] <- na.omit(dy)[1]
  Dum[is.na(Dum)] <- na.omit(Dum)[1]
  Com[is.na(Com)] <- na.omit(Com)[1]
  dP <- Dum * dy - .5 * Com * dy ^ 2
  xout.m <- xout / 12
  dcurve <- cbind(dP * 100, xout.m)
  PVBP <- dcurve[which(dP == max(dP)), 1]
  Maty <- dcurve[which(dP == max(dP)), 2]
  return(- PVBP / x)
}
DEoptim(fn = besty, lower = 1, upper = 120)
For me, the last command returns
ERROR: unsupported objective function return value
What's wrong with my code, such that good old DEoptim does not succeed in optimizing it?
If I replace the function's last command line
return(- PVBP / x)
with
return(as.numeric(- PVBP / x))
it seems DEoptim works fine for a few iterations, then...
> DEoptim(fn = besty, lower = 1, upper = 12)
Iteration: 1 bestvalit: -0.898391 bestmemit: 1.186242
Iteration: 2 bestvalit: -0.903304 bestmemit: 1.185117
Iteration: 3 bestvalit: -0.999273 bestmemit: 1.043355
Iteration: 4 bestvalit: -0.999273 bestmemit: 1.043355
Error in DEoptim(fn = besty, lower = 1, upper = 12) :
unsupported objective function return value
Maybe something in the function syntax?
Thanks, guys :)
I don't know what exactly you are trying to do, so I can't give you a precise answer. However, here are the steps to figure out what is wrong.
Change your function to:
besty <- function(x) {
  cat(x, "\n")
  dtau.0 <- x
  xout <- seq(1, max(tau.0), dtau.0)
  <snip>
Now when you run the optimiser:
set.seed(1)
DEoptim(fn = besty, lower = 1, upper = 120)
you get the passed values printed out:
32.6
45.28
69.17
....
In particular, it breaks when the value x = 8.353 is passed.
Next, step through your function with this particular value, i.e.
x = 8.353
dtau.0 <- x
xout <- seq(1, max(tau.0), dtau.0)
ratem <- approx(x = tau.0, y = rate / 1, xout = xout)$y
ym <- approx(x = tau.0, y = y / 1, xout = xout)$y
Dum <- approx(x = tau.0, y = Du, xout = xout)$y
Com <- approx(x = tau.0, y = Co, xout = xout)$y
....
I don't know exactly what you are doing, so I can't tell you what's "wrong".
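A variation on the same idea, my own and not part of the answer above: wrap besty() instead of editing its body, and log both the input and the shape of what comes back. As far as I recall from ?DEoptim, fn must return a single real value and NA/NaN are not allowed, so a return of length 0, length > 1, or containing NAs is the thing to look for.
# Debugging wrapper only: print each input and the value/length returned,
# then pass the value through to DEoptim unchanged.
besty_debug <- function(x) {
  val <- besty(x)
  cat("x =", x, "-> value:", val, "(length", length(val), ")\n")
  val
}
set.seed(1)
DEoptim(fn = besty_debug, lower = 1, upper = 120)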
Solved thanks to tips from Aaron, Joshua Ulrich and csgillespie.
Two modifications are required for the code to work properly:
...
PVBP <- dcurve[which(dP == max(dP)),1]
Maty <- dcurve[which(dP == max(dP)),2]
...
must be replaced with
...
PVBP <- dcurve[which(dP == max(dP)),1][1]
Maty <- dcurve[which(dP == max(dP)),2][1]
...
while
...
return(- PVBP / x)
...
must be replaced with
...
return(as.numeric(- PVBP / x))
...
and, to avoid NAs in the objective function, the bounds must be set to
DEoptim(fn = besty, lower = 1, upper = max(tau.0) / 12)
Thanks to everyone who helped me!
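To make the first of those two fixes concrete, here is a minimal sketch (toy numbers, not the data above) of how the row selection can stop being a single value, which is what DEoptim objects to, since the objective must return one scalar real:
# Toy example: when the maximum of dP is tied, which() matches several rows
# and the selected PVBP has length > 1; adding [1] guarantees a scalar.
dP <- c(1, 3, 3, 2)
dcurve <- cbind(dP * 100, seq_along(dP) / 12)
dcurve[which(dP == max(dP)), 1]      # length 2 when the maximum is tied
dcurve[which(dP == max(dP)), 1][1]   # always a single number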
