Global Optimization with bounds in R [closed]

I am looking for an optimizer that minimizes a (non-linear) least-squares problem for a global minimum with constraints.
I was trying to use SANN optimization in R but realised that it doesn't allow constraints. I actually just want to bound my parameter to lie between 0 and 1.
Is there a package available for that?
Thank you very much in advance.

You could apply optim with method "L-BFGS-B", which directly allows box constraints. If the results are very sensitive to the initial parameters, you could run the optimisation over a grid of initial values supplied to par and then choose the parameters that give the best result.
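For example, a minimal sketch with a toy one-parameter least-squares objective (the data and objective here are placeholders, not from the question):
set.seed(1)
x <- runif(50)
y <- 0.4 * x + rnorm(50, sd = 0.01)
f <- function(beta) sum((y - beta * x)^2)   # toy objective; replace with your own
fit <- optim(par = 0.5, fn = f, method = "L-BFGS-B",
             lower = 1e-8, upper = 1 - 1e-8)  # keep beta strictly inside (0, 1)
fit$par
starts <- seq(0.05, 0.95, by = 0.1)           # grid of initial values
fits <- lapply(starts, function(s)
  optim(s, f, method = "L-BFGS-B", lower = 1e-8, upper = 1 - 1e-8))
best <- fits[[which.min(sapply(fits, `[[`, "value"))]]  # keep the best run
best$par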
You could also use "SANN" with optim (or any other unconstrained optimiser), but change your objective function so that it is automatically constrained. For example, if you really want to minimise with respect to \beta but \beta must lie between 0 and 1, then you could instead minimise with respect to \tau and replace \beta by exp(\tau)/(1+exp(\tau)) (the inverse-logit, or logistic, function) in your objective function. It will then always be between 0 and 1.
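Continuing the toy example above (plogis computes exp(\tau)/(1+exp(\tau))):
g <- function(tau) f(plogis(tau))  # unconstrained in tau; plogis(tau) is always in (0, 1)
fit2 <- optim(par = 0, fn = g, method = "SANN")
plogis(fit2$par)                   # map the estimate back to the (0, 1) scale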

Related

Is there a way to inform classifiers in R of the relative costs of misclassification? [closed]

This is a general question. Are there classifiers in R (functions that implement classification algorithms) that accept the relative cost of misclassification as an input argument? E.g. misclassifying a positive as a negative has cost 1, while the opposite has cost 3.
If yes, which are these functions?
Yes. If you are using the caret package (you should; it provides a standardised interface to 200+ classification and regression methods by wrapping almost all relevant R packages), you can set the weights argument of the train function for models that support case weights. This answer lists some of the models that support class weights.
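A hedged sketch of the idea (the data, the 1:3 weighting, and the choice of rpart are placeholders; only some methods honour case weights):
library(caret)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- factor(ifelse(d$x1 + d$x2 + rnorm(200) > 0, "pos", "neg"))
w <- ifelse(d$y == "pos", 3, 1)  # make errors on "pos" cases three times as costly
fit <- train(y ~ ., data = d, method = "rpart", weights = w,
             trControl = trainControl(method = "cv", number = 5))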

gmp for R and other solutions [closed]

I implemented a solution to a problem in arbitrary-precision arithmetic using gmp, but the results are rather strange. As part of troubleshooting, I was wondering whether there is any other package which would allow operations similar to gmp in R. I would need something like chooseZ and multiplication of numbers larger than 2^64, just to make sure that I am not making an error somewhere in this step of my script.
I need to compute numbers like
choose(2450, 765), then multiply the result by a floating-point number like 0.0034.
The log solution is not really working, because the expression can also be
the sum over i from 2 to k of i^2 * choose(1800, 800) * 0.089,
so I would need a way to sum over those terms.
You could just work on the logarithmic scale:
lchoose(2450,765) + log(0.0034)
#[1] 1511.433
If you exponentiate this, you get a really big number. I simply do not believe that this number would be different from Inf for any practical purpose and I believe even less that you'd need it to exact precision.
Edit:
If you want to calculate \sum_{i=2}^{k} i^2 * choose(1800, 800) * 0.089, you should see that this is the same as choose(1800, 800) * 0.089 * \sum_{i=2}^{k} i^2, and then you can again work on the logarithmic scale.

spline approximation with specified number of intervals [closed]

So, edited, because some of us thought that this question is off-topic.
I need to build a spline (approximation) on 100 points in one of the environments listed in the tags, but I need it with an exact number of intervals (at most 6 intervals, i.e. separate equations, over the whole domain). The packages/libraries I know in R and Maxima let me build a spline on these points, but with 25-30 intervals (separate equations). Does anyone know how to build a spline with a set number of intervals without coding the whole algorithm from scratch?
What you're looking for might be described as "local regression" or "localized regression"; searching for those terms might turn up some hits.
I don't know if you can find exactly what you've described. But implementing it doesn't seem too complicated: (1) Split the domain into N intervals (say N=10). For each interval, (2) make a list of the data in the interval, (3) fit a low-order polynomial (e.g. cubic) to the data in the interval using least squares.
If that sounds interesting to you, I can go into details, or maybe you can work it out yourself.
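For what it's worth, a minimal R sketch of that idea (assuming numeric vectors x and y of 100 points, and N = 6 intervals):
N <- 6
breaks <- seq(min(x), max(x), length.out = N + 1)
bin <- cut(x, breaks, include.lowest = TRUE)   # assign each point to an interval
fits <- lapply(split(data.frame(x, y), bin), function(d)
  lm(y ~ poly(x, 3), data = d))                # cubic least-squares fit per interval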

Calculate differential in Fortran [closed]

I want to calculate w_j for j = 0 to n in the function below. Is there any already-written library for this in Fortran?
Actually, I want to write a program that gets n from the user and prints w in the output. What should I do for the differential and for creating the equation L_n(x)?
That recurrence relation will generate the n-th order Legendre polynomial, and from the x_j and w_j, I assume you are writing a program to perform Gauss-Legendre integration (no idea why the q(x) is there).
This Florida State page provides an LGPL Fortran 90 program that calculates the nodes and weights using a tridiagonal-eigenvalue method and writes them to an external file. You could try to collect all of the contained functions and place them in a module for run-time calculation of the nodes and weights.
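As an aside, if you want to sanity-check the nodes and weights your Fortran program produces, you could compare against an existing implementation, e.g. the statmod package in R (assuming R is available to you):
library(statmod)                        # install.packages("statmod")
gq <- gauss.quad(5, kind = "legendre")  # 5-point Gauss-Legendre rule on [-1, 1]
gq$nodes                                # abscissas x_j
gq$weights                              # weights w_j
sum(gq$weights * gq$nodes^2)            # ~ 2/3, the integral of x^2 over [-1, 1]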

How to calculate Total least squares in R? (Orthogonal regression) [closed]

I didn't find a function to calculate the orthogonal regression (TLS - Total Least Squares).
Is there a package with this kind of function?
Update: I mean measuring the distance of each point to the line symmetrically (orthogonally), not asymmetrically (vertically) as lm() does.
You might want to consider the Deming() function in the MethComp package. The package also contains a detailed derivation of the theory behind Deming regression.
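A minimal sketch (assuming numeric vectors x and y; with equal error variances, Deming regression coincides with orthogonal/TLS regression):
library(MethComp)    # install.packages("MethComp")
fit <- Deming(x, y)  # Deming regression of y on x
fit                  # includes the intercept and slope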
The following searches of the R Archives also provide plenty of options:
Total Least Squares
Deming regression
Your multiple questions on CrossValidated, here and R-Help imply that you need to do a bit more work to describe exactly what you want to do, as the terms "Total least squares" and "orthogonal regression" carry some degree of ambiguity about the actual technique wanted.
Two answers:
gx.rma in the rgr package appears to do this.
Brian Ripley has given a succinct answer on this thread. Basically, you're looking for PCA, and he suggests princomp. I do, too.
I got the following solution from this url:
https://www.inkling.com/read/r-cookbook-paul-teetor-1st/chapter-13/recipe-13-5
r <- prcomp( ~ x + y )                          # PCA on the two variables
slope <- r$rotation[2,1] / r$rotation[1,1]      # slope of the first principal component
intercept <- r$center[2] - slope * r$center[1]  # the line passes through the centroid
Basically you perform a PCA that fits a line through x and y minimising the orthogonal residuals. Then you can retrieve the intercept and slope from the first component.
For anyone coming across this question again: there is now a dedicated package, onls, for that purpose. It is used much like the nls() function from the stats package (which fits ordinary nonlinear least squares).
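A hedged sketch of its use (the model and data here are made up for illustration; onls mimics the nls() interface):
library(onls)  # install.packages("onls")
set.seed(1)
d <- data.frame(x = 1:20)
d$y <- 5 * exp(-0.3 * d$x) + rnorm(20, sd = 0.05)
fit <- onls(y ~ a * exp(-b * x), data = d, start = list(a = 1, b = 0.1))
coef(fit)      # orthogonal least-squares estimates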
