I am trying to use the Rsocp package in R to solve a linear optimization problem with quadratic constraints, much like in the question "R - fPortfolio - Error in eqsumW[2, -1] : subscript out of bounds". More specifically, I am attempting to maximize an expected return given a target risk parameter and portfolio/position limits.
install.packages("Rsocp", repos="http://R-Forge.R-project.org")
install.packages("fPortfolio")
require(fPortfolio)
require(Rsocp)
I run
# Six assets from the LPP 2005 benchmark data, scaled to percentages
lppData <- 100 * LPP2005.RET[, 1:6]

# Portfolio specification: maximize return at a target risk of 0.07
maxRetSpec <- portfolioSpec()
setTargetRisk(maxRetSpec) <- 0.07

# Group constraints: the weights may sum to between -0.75 and 1.75
groupConstraints <- c("minsumW[1:6]=-0.75",
                      "maxsumW[1:6]=1.75")
# Box constraints: each individual weight between -1 and 1
boxConstraints <- c("minW[1:6]=-1",
                    "maxW[1:6]=1")
bgConstraints <- c(groupConstraints, boxConstraints)

setSolver(maxRetSpec) <- "solveRsocp"
efficientPortfolio(data = lppData, spec = maxRetSpec, constraints = bgConstraints)
and get the following error...
Error in eqsumW[2, -1] : subscript out of bounds
It is my understanding that Rsocp is a second-order cone solver designed specifically for this purpose. Having gone through several different Stack Exchange threads, it seems several people have encountered this problem without finding a satisfactory solution. Has anyone had success using the Rsocp solver who could give me a hand working through this error? Alternatively, can someone point me towards an R solver that can handle this type of optimization problem?
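For reference, here is a sketch of the same problem written with CVXR, an alternative general convex solver interface I have been looking at. mu and Sigma are the sample mean vector and covariance matrix of the six assets, and I am assuming the 0.07 target risk is a standard deviation (hence the squared bound); this is only a sketch, not fPortfolio's internal formulation:
library(CVXR)

rets  <- as.matrix(lppData)        # asset returns as a plain matrix
mu    <- colMeans(rets)            # expected returns (sample means)
Sigma <- cov(rets)                 # sample covariance matrix

w <- Variable(6)                   # portfolio weights
objective   <- Maximize(t(mu) %*% w)
constraints <- list(
  quad_form(w, Sigma) <= 0.07^2,   # quadratic target-risk constraint
  sum(w) >= -0.75, sum(w) <= 1.75, # group constraints from above
  w >= -1, w <= 1                  # box constraints from above
)
result <- solve(Problem(objective, constraints))
result$getValue(w)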
Related
I have been solving an optimization problem in R with 2 variables using lpSolveAPI.
One of the constraints is standarddeviation(ax1, bx2) = 1.24.
I am unable to input this constraint; it throws an error message:
"The length of xt must be equal to the number of decision variables in
lprec"
Could you please suggest the suitable package in R where I would be able to input the above constraint?
Hi, I was able to handle this constraint with the solnp algorithm in the Rsolnp package. I tried other algorithms in nloptr, but solnp gave a better solution.
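For concreteness, here is a minimal sketch of how such a constraint can be passed to solnp. The objective, the coefficients a and b, the starting values, and the bounds are all made-up placeholders; only the standard-deviation equality constraint mirrors the question:
library(Rsolnp)

a <- 2; b <- 3                          # hypothetical coefficients

# solnp minimizes, so negate the objective to maximize a*x1 + b*x2
obj <- function(x) -(a * x[1] + b * x[2])

# equality constraint: sd of the two terms must equal 1.24
eqfun <- function(x) sd(c(a * x[1], b * x[2]))

fit <- solnp(pars = c(0.5, 0.5),        # starting values
             fun   = obj,
             eqfun = eqfun,
             eqB   = 1.24,
             LB = c(0, 0), UB = c(1, 1))
fit$pars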
I am new to the package CVXR. I am using it to do the convex optimization within each iteration of an EM algorithm. Everything is fine at first, but after 38 iterations I get an error:
Error in valuesById(object, results_dict, sym_data, solver) :
Solver failed. Try another.
I am not sure why the solver works fine at first but then fails later. I looked in the manual for how to change the solver but could not find the answer. I am also curious whether we can specify the learning step size in CVXR. I'd really appreciate any help.
You can get the list of solvers installed for CVXR with
installed_solvers()
In my case that is:
# "ECOS" "ECOS_BB" "SCS"
You can change the solver that is used with the solver argument, e.g. to switch from the default ECOS to SCS:
result <- solve(prob, solver="SCS")
I think the developers are planning to support other solvers in the future, e.g. Gurobi.
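Since the failure here happens mid-way through an EM loop, one pragmatic sketch is to catch the error and retry with another installed solver; prob is assumed to be the CVXR Problem built in the current iteration, and the solver names are whatever installed_solvers() reports on your machine:
result <- tryCatch(
  solve(prob, solver = "ECOS"),                    # try the default first
  error = function(e) solve(prob, solver = "SCS")  # fall back on failure
)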
I tried to apply the stability function from the ClustOfVar package and got the error message below:
Error in La.svd(x, nu, nv) : error code 1 from Lapack routine 'dgesdd'.
I intend to do variable clustering on a data set including both quantitative and qualitative variables. The R code I used is shown below. At first I used the data directly (i.e., without standardizing the quantitative variables) and got the error message when running the stability function. Then I scaled the quantitative variables, reran the code, and got the same error message. Could someone suggest how to fix the problem? Also, I do not think I need a separate step to standardize the quantitative variables, because the hclustvar function should handle the standardization, right?
# Quantitative variables (columns 9-28), raw and standardized
X.quanti  <- Data4Cluster[, 9:28]
X.quanti2 <- scale(X.quanti, center = TRUE, scale = TRUE)
# Qualitative variables
X.quali <- Data4Cluster[, c(1:4, 8)]

# Clustering on the raw quantitative variables
tree <- hclustvar(X.quanti, X.quali)
plot(tree)
stab <- stability(tree, B = 40)

# Clustering on the standardized quantitative variables
tree2 <- hclustvar(X.quanti2, X.quali)
plot(tree2)
stab <- stability(tree2, B = 40)
I am having exactly the same problem. The only thing that fixed it for me was reducing the value of B to 20, but I don't think that is right, so I hope someone can give us a solution. My worry, from searching the web, is that there is a bug in LAPACK that seems unfixable (this error is a common occurrence with various functions).
I had this error using the MASS::lda function. The error went away when I removed collinear variables from the model.
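For illustration, here is a small sketch of one way to drop near-collinear predictors before calling lda. drop_collinear is a hypothetical helper, not part of MASS, and the 0.99 cutoff is arbitrary:
# drop every column that is almost perfectly correlated with a later one
drop_collinear <- function(X, cutoff = 0.99) {
  cm <- abs(cor(X))
  cm[upper.tri(cm, diag = TRUE)] <- 0   # consider each pair only once
  X[, !apply(cm > cutoff, 2, any), drop = FALSE]
}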
In the R package spatstat (I am using the current version, 1.31-0), there is an option use.gam. When you set it to TRUE, you can include smooth terms in the linear predictor, the same way you do with the R package mgcv. For example,
g <- ppm(nztrees, ~1+s(x,y), use.gam=TRUE)
Now, if I want a confidence interval for the intercept, I would usually use summary or vcov, which works when you don't use gam but fails when you do. Running
vcov(g)
gives the error message
Error in model.frame.default(formula = fmla, data = list(.mpl.W = c(7.09716796875, :
  invalid type (list) for variable 's(x, y)'
I am aware that this standard error approximation is not justified when gam is used, but that is captured by the warning message:
In addition: Warning message: model was fitted by gam();
asymptotic variance calculation ignores this
I'm not concerned about this - I am prepared to justify the use of these standard errors for the purpose I'm using them - I just want the numbers and would like to avoid "writing-my-own" to do so.
The error message I got above does not seem to depend on the data set I'm using. I used the nztrees example here because I know it comes pre-loaded with spatstat. It seems like it's complaining about the variable itself, but the model clearly understands the syntax since it fits the model (and the predicted values, for my own dataset, look quite good, so I know it's not just pumping out garbage).
Does anybody have any tips or insights about this? Is this a bug? To my surprise, I've been unable to find any discussion of this online. Any help or hints are appreciated.
Edit: Although I have definitively answered my own question here, I will not accept my answer for the time being. That way, if someone is interested and willing to put in the effort to find a "workaround" for this without waiting for the next edition of spatstat, I can award the bounty to him/her. Otherwise, I'll just accept my own answer at the end of the bounty period.
I have contacted one of the package authors, Adrian Baddeley, about this. He promptly responded and let me know that this is indeed a bug with the software and, apparently, I am the first person to encounter it. Fortunately, it only took him a short time to track down the issue and correct it. The fix will be included in the next release of spatstat, 1.31-1.
Edit: The updated version of spatstat has been released and does not have this bug anymore:
g <- ppm(nztrees, ~1+s(x,y), use.gam=TRUE)
sqrt( vcov(g)[1,1] )
[1] 0.1150982
Warning message:
model was fitted by gam(); asymptotic variance calculation ignores this
See the spatstat website for other release notes. Thanks to all who read and participated in this thread!
I'm not sure you can specify the trend the way you have, which is possibly what is causing the error. It doesn't seem to make sense according to the documentation:
The default formula, ~1, indicates the model is stationary and no trend is to be fitted.
But instead you can specify the model like so:
g <- ppm(nztrees, ~x+y, use.gam=TRUE)
#Then to extract the coefficients:
> coef(g)
(Intercept) x y
-5.0346019490 0.0013582470 -0.0006416421
#And calculate their standard errors:
vc <- vcov(g)
se <- sqrt(diag(vc))
> se
(Intercept) x y
0.264854030 0.002244702 0.003609366
Does this make sense / is it the expected result? I know that the package authors are very active on the r-sig-geo mailing list, as they have helped me in the past. You may also want to post your question to that mailing list, but you should reference your question here when you do.
I recently started experimenting with the biganalytics package for R. I ran into a problem, however.
I am trying to run bigkmeans with about 2000 cluster centers, e.g. clust <- bigkmeans(mymatrix, centers = 2000).
However, I get the following error:
Error in 1:(10 + 2^k) : result would be too long a vector
Can someone give me a hint as to what I am doing wrong here?
Vectors are limited by the type used for the index -- there is/was some talk about replacing this index type with a double, but it hasn't happened yet and is unlikely to, as it may break so much existing code.
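For illustration, the expression in the error message shows what goes wrong: with k = 2000 (apparently the number of centers), 2^k is not even representable as a double, so the index vector cannot be built:
k <- 2000
2^k            # Inf: 2^2000 overflows the double range (max ~1.8e308)
# 1:(10 + 2^k) # this is what raises "result would be too long a vector"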
If your k is really large, you may not be able to do this the way you had planned.