I am solving the following simple linear program with pracma::linprog in R:
library(pracma)
cc <- c(1)
A <- as.matrix(-1)
b <- c(1)
linprog(cc, A, b, maximize = FALSE)
The solution returned is x = 0.
However, this solution is incorrect: the constraint -x <= 1 allows x = -1, which gives the lower objective value -1.
I find that both Wolfram Alpha and Matlab return the correct solution.
Why does linprog return the wrong solution? Is there any way to correct this problem?
From ?linprog
This is a first version that will be unstable at times. For real
linear programming problems use package lpSolve
Which gives the same answer
lpSolve::lp(direction = "min", cc, A, "<=", b)
Success: the objective function is 0
And from ?lp
Note that every variable is assumed to be >= 0!
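Since both solvers assume non-negative variables, the standard workaround is to split the free variable into its positive and negative parts, x = xp - xm with xp, xm >= 0. A minimal sketch with lpSolve (variable names are illustrative):

```r
library(lpSolve)

# minimize x subject to -x <= 1, with x free.
# Substitute x = xp - xm, xp >= 0, xm >= 0.
obj <- c(1, -1)                    # objective: xp - xm  (= x)
con <- matrix(c(-1, 1), nrow = 1)  # -x <= 1 becomes -xp + xm <= 1
res <- lp(direction = "min", obj, con, "<=", c(1))

res$objval    # -1, i.e. the true optimum x = -1
res$solution  # e.g. xp = 0, xm = 1
```

The recovered solution is x = res$solution[1] - res$solution[2] = -1, matching Matlab and Wolfram Alpha.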
I know that it's quite simple to solve this in Matlab, but is there a way to solve it in R as well? I need it for my Kalman filter implementations.
I'm searching for a package, or some other way, to use the AdaBoost.R2 algorithm (AdaBoost for regression) in R. I can use it in ROOT, where I can set the loss function to "quadratic", and I want to do the same thing in R. Can someone help me? Thanks.
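If no packaged version fits, AdaBoost.R2 (Drucker, 1997) is short enough to sketch directly in R with rpart trees as the base learner. The sketch below uses the quadratic loss; all names here are illustrative, not from an existing package:

```r
library(rpart)

adaboost_r2 <- function(x, y, n_rounds = 50) {
  n <- length(y)
  w <- rep(1 / n, n)                      # uniform instance weights
  models <- list()
  betas <- numeric(0)
  dat <- data.frame(x, y = y)
  for (t in seq_len(n_rounds)) {
    fit <- rpart(y ~ ., data = dat, weights = w)
    pred <- predict(fit, dat)
    err <- abs(y - pred)
    D <- max(err)
    if (D == 0) {                         # perfect fit: keep it and stop
      models[[length(models) + 1]] <- fit
      betas <- c(betas, 1e-10)
      break
    }
    L <- (err / D)^2                      # quadratic loss, scaled to [0, 1]
    Lbar <- sum(w * L)                    # weighted average loss
    if (Lbar >= 0.5) break                # learner too weak: stop
    beta <- Lbar / (1 - Lbar)
    w <- w * beta^(1 - L)                 # shrink weights of easy points
    w <- w / sum(w)
    models[[length(models) + 1]] <- fit
    betas <- c(betas, beta)
  }
  structure(list(models = models, betas = betas), class = "adaboost_r2")
}

predict.adaboost_r2 <- function(object, newdata, ...) {
  preds <- sapply(object$models, predict, newdata = newdata)
  if (is.null(dim(preds))) preds <- matrix(preds, nrow = 1)
  a <- log(1 / object$betas)              # model weights
  apply(preds, 1, function(p) {           # weighted median per observation
    o <- order(p)
    cw <- cumsum(a[o]) / sum(a)
    p[o][which(cw >= 0.5)[1]]
  })
}
```

Swapping the loss is one line: use err / D for linear or 1 - exp(-err / D) for exponential instead of (err / D)^2.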
I'm having difficulties solving a maximization problem with constraints in R.
I've tried using constrOptim(), but I can't figure out what theta is or what it should be set to.
Can anyone help?
Use the logarithm to get linear constraints and then use the constrOptim function.
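For reference, in constrOptim() the theta argument is just the starting point, and it must lie strictly inside the feasible region, i.e. satisfy ui %*% theta - ci > 0. To maximize rather than minimize, set control = list(fnscale = -1). A minimal sketch on a made-up illustrative problem (the objective and constraints below are not from the question):

```r
# Maximize f(x) = -(x1 - 1)^2 - (x2 - 2)^2
# subject to x1 + x2 <= 2, x1 >= 0, x2 >= 0.
# constrOptim expects constraints in the form ui %*% x >= ci.
f  <- function(x) -(x[1] - 1)^2 - (x[2] - 2)^2
ui <- rbind(c(-1, -1),   # -x1 - x2 >= -2  (i.e. x1 + x2 <= 2)
            c( 1,  0),   #  x1 >= 0
            c( 0,  1))   #  x2 >= 0
ci <- c(-2, 0, 0)

# theta = c(0.5, 0.5) is a strictly feasible starting point.
res <- constrOptim(theta = c(0.5, 0.5), f = f, grad = NULL,
                   ui = ui, ci = ci, control = list(fnscale = -1))
res$par  # close to the constrained optimum (0.5, 1.5)
```

With grad = NULL, constrOptim falls back to Nelder-Mead, so no gradient is needed.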
I was looking for ways to invert functions in R. uniroot() seems to offer a way to do this, but I don't know how.
My problem is quite simple, I guess. I have a nonlinear regression given by a trigonometric function:
f <- function(d, x)
  d * sqrt(-0.9298239 * (x - 1) + 0.0351246 * sin(2.0737115 * pi * x) +
           0.0037540 * (1 / tan(pi * x / 2)))
I need to find the x value based on y value. How can I do that?
Edit: If I plot the expression f(x) - y, I get the curve shown in the image below. Now, how can I extract the x value that drives it to zero?
[image: minimization]
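uniroot() does exactly this: given y, it finds the x where f(d, x) - y crosses zero, provided you supply an interval on which the sign changes. A sketch, assuming d = 1 and that the target x lies in (0.1, 0.9) (both assumptions for illustration):

```r
f <- function(d, x)
  d * sqrt(-0.9298239 * (x - 1) + 0.0351246 * sin(2.0737115 * pi * x) +
           0.0037540 * (1 / tan(pi * x / 2)))

# invert f in x: find the root of f(d, x) - y on [lower, upper]
inv_f <- function(y, d, lower, upper)
  uniroot(function(x) f(d, x) - y, lower = lower, upper = upper)$root

# sanity check: recover x = 0.5 from its own y value
y0 <- f(1, 0.5)
inv_f(y0, d = 1, lower = 0.1, upper = 0.9)  # ~0.5
```

Note that uniroot() requires f(d, lower) - y and f(d, upper) - y to have opposite signs; if the function is not monotone on the interval, it returns one root, not necessarily the one you want, so choose the bracket to isolate the root of interest.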